Computer science
Computation offloading
Reinforcement learning
Mobile edge computing
Distributed computing
Edge computing
Computation
Task (project management)
Enhanced Data Rates for GSM Evolution
Convergence (economics)
Energy consumption
Internet of Things
Stability (learning theory)
Artificial intelligence
Embedded system
Machine learning
Algorithm
Management
Economics
Biology
Economic growth
Ecology
Authors
Han Hu, Dingguo Wu, Fuhui Zhou, Shi Jin, Rose Qingyang Hu
Identifier
DOI: 10.1109/globecom46510.2021.9685906
Abstract
Mobile edge computing (MEC) has recently emerged as an enabling technology to support computation-intensive and delay-critical applications for energy-constrained and computation-limited Internet of Things (IoT) devices. Due to time-varying channels and dynamic task patterns, making efficient and effective computation offloading decisions is challenging, especially in multi-server multi-user IoT networks, where the decisions involve both continuous and discrete actions. In this paper, we investigate computation task offloading in a dynamic environment and formulate a task offloading problem to minimize the average long-term service cost in terms of power consumption and buffering delay. To improve the estimation of this long-term cost, we propose a deep reinforcement learning based algorithm in which deep deterministic policy gradient (DDPG) and dueling double deep Q-networks (D3QN) handle the continuous and discrete action domains, respectively. Simulation results validate that the proposed DDPG-D3QN algorithm exhibits better stability and faster convergence than existing methods, while noticeably reducing the average system service cost.
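The abstract pairs two DRL components: D3QN for the discrete part of the offloading decision (e.g., which server to offload to) and DDPG for the continuous part (e.g., transmit power). The paper's actual network architecture and hyperparameters are not given here, so the following is a minimal sketch, assuming a PyTorch implementation with illustrative state/action dimensions, of the two ingredients that define D3QN: a dueling Q-network and a double-DQN target.

```python
# Minimal D3QN sketch (not the authors' implementation). State dimension,
# action count, hidden size, and gamma below are illustrative assumptions.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s,a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward: torch.Tensor, next_state: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN: the online net selects the next action, the target
    net evaluates it, which reduces Q-value overestimation."""
    with torch.no_grad():
        best = online(next_state).argmax(dim=-1, keepdim=True)
        q_next = target(next_state).gather(-1, best).squeeze(-1)
        return reward + gamma * (1.0 - done) * q_next

# Example with a toy 8-dimensional state and 4 discrete offloading choices;
# the reward is the negative service cost, i.e. -(power + buffering delay).
net, tgt = DuelingQNet(8, 4), DuelingQNet(8, 4)
tgt.load_state_dict(net.state_dict())
y = double_dqn_target(net, tgt,
                      reward=torch.tensor([-1.5]),
                      next_state=torch.randn(1, 8),
                      done=torch.tensor([0.0]))
```

In the hybrid scheme the abstract describes, a DDPG actor would output the continuous action component alongside the discrete choice selected by a Q-network like the one above, with both trained against the same negative-cost reward signal.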