Authors
Bingkun He, Haokun Li, Tong Chen
Abstract
In recent years, the rapid advancement of the Internet of Things (IoT) and the widespread adoption of smart cities have posed new challenges to computing services. Traditional cloud computing models cannot meet the rapid-response requirements of latency-sensitive applications, whereas mobile edge computing (MEC) improves service efficiency and user experience by moving computing tasks to servers located at the network edge. However, designing an effective computation offloading strategy in complex scenarios involving multiple computing tasks, nodes, and services remains a pressing issue. In this paper, a computation offloading approach based on Deep Reinforcement Learning (DRL) is proposed for large-scale heterogeneous computing tasks. First, the computation offloading decision and resource allocation problems in large-scale heterogeneous MEC systems are formulated as Markov Decision Processes (MDPs). Then, a comprehensive "end-edge-cloud" framework is constructed, together with the corresponding time-overhead and resource allocation models. Finally, extensive experiments on real datasets demonstrate that the proposed approach outperforms existing methods in improving service response speed, reducing latency, balancing server loads, and saving energy.