Computer science
Reinforcement learning
Mobile edge computing
Enhanced Data Rates for GSM Evolution
Server
Task (project management)
Distributed computing
Parameterized complexity
Edge computing
Artificial intelligence
Computer network
Algorithm
Management
Economics
Authors
Ting Wang,Yuxiang Deng,Youjian Zhao,Yang Wang,Haibin Cai
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2024-03-15
Volume/Issue: 11 (6): 10754-10767
Identifier
DOI: 10.1109/jiot.2023.3327121
Abstract
Multi-access edge computing (MEC) has emerged as a promising solution that enables low-end terminal devices to run large, complex applications by offloading their tasks to edge servers. The task offloading strategy, which determines how tasks are offloaded, remains the most critical issue in MEC. Traditional offloading approaches either suffer from high computational complexity or adapt poorly to dynamic changes in the edge environment. Deep reinforcement learning (DRL) provides an effective way to tackle these issues. However, most existing DRL-based methods consider only a continuous or only a discrete action space, and the limited action space results in accuracy loss and restricts the optimality of offloading decisions. Yet in practice, the edge task offloading problem often involves both discrete and continuous actions. In this paper, we propose a tailored Proximal Policy Optimization (PPO)-based method, named Hybrid-PPO, enhanced by a parameterized discrete-continuous hybrid action space. Building on Hybrid-PPO, we further design a novel DRL-based multi-server multi-task collaborative partial task offloading scheme adhering to a series of specifically built formal models. Experimental results demonstrate that our approach achieves high offloading efficiency and outperforms existing state-of-the-art offloading schemes in terms of convergence rate, energy cost, time cost, and generalizability under various network conditions.
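The abstract's central technical idea is the parameterized discrete-continuous hybrid action space: each offloading decision pairs a discrete choice (e.g., which edge server to use, or local execution) with continuous parameters (e.g., what fraction of the task to offload). The following is a minimal, hypothetical PyTorch sketch of such a hybrid policy head, not the paper's implementation; the state layout, the `num_servers + 1` discrete action set, the offload-ratio parameterization, and all dimensions are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a policy with a parameterized
# discrete-continuous hybrid action space, in the spirit of Hybrid-PPO-style
# methods. All shapes and action semantics are assumptions.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal


class HybridPolicy(nn.Module):
    def __init__(self, state_dim: int, num_servers: int, hidden: int = 128):
        super().__init__()
        # Shared encoder over the observed edge-environment state.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Discrete head: which edge server handles the task (+1 for local execution).
        self.discrete_head = nn.Linear(hidden, num_servers + 1)
        # Continuous head: one offload-ratio parameter per discrete choice.
        self.ratio_mean = nn.Linear(hidden, num_servers + 1)
        self.ratio_log_std = nn.Parameter(torch.zeros(num_servers + 1))

    def forward(self, state: torch.Tensor):
        h = self.encoder(state)
        choice_dist = Categorical(logits=self.discrete_head(h))
        ratio_dist = Normal(torch.sigmoid(self.ratio_mean(h)),
                            self.ratio_log_std.exp())
        return choice_dist, ratio_dist

    def act(self, state: torch.Tensor):
        choice_dist, ratio_dist = self.forward(state)
        choice = choice_dist.sample()            # discrete: target server / local
        ratios = ratio_dist.sample()             # continuous: ratio per choice
        idx = choice.unsqueeze(-1)
        # Joint log-probability of the hybrid action: discrete choice plus the
        # continuous parameter attached to that choice (used by the PPO objective).
        log_prob = choice_dist.log_prob(choice) + \
            ratio_dist.log_prob(ratios).gather(-1, idx).squeeze(-1)
        ratio = ratios.gather(-1, idx).squeeze(-1).clamp(0.0, 1.0)
        return choice, ratio, log_prob


if __name__ == "__main__":
    policy = HybridPolicy(state_dim=16, num_servers=4)
    state = torch.randn(2, 16)                   # batch of two observations
    server, offload_ratio, logp = policy.act(state)
    print(server, offload_ratio, logp)
```

In a PPO training loop, the returned joint log-probability would feed the standard clipped surrogate objective, with the hybrid action treated as the pair (discrete server choice, selected offload ratio).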