Computer science
Markov decision process
Distributed computing
Mobile edge computing
Edge computing
Reinforcement learning
Computation offloading
Energy consumption
Scheduling (production processes)
Edge device
Latency (audio)
Cloud computing
Cellular network
Cloud
Server
Computer network
Mobile device
Markov process
Enhanced Data Rates for GSM Evolution
Artificial intelligence
Mathematical optimization
Statistics
Ecology
Mathematics
Telecommunications
Biology
Operating system
Authors
Tong Liu, Yameng Zhang, Yanmin Zhu, Weiqin Tong, Yuanyuan Yang
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2021-04-15
Volume/Issue: 8 (8): 6649-6664
Cited by: 40
Identifier
DOI: 10.1109/jiot.2021.3051427
Abstract
With the explosion of mobile smart devices, many computation-intensive applications have emerged, such as interactive gaming and augmented reality. Mobile-edge computing (EC) is put forward, as an extension of cloud computing, to meet the low-latency requirements of these applications. In this article, we consider an EC system built in an ultradense network with numerous base stations. Heterogeneous computation tasks are successively generated on a smart device moving through the network. The device user desires an optimal task offloading strategy, together with optimal CPU frequency and transmit power scheduling, that minimizes both task completion latency and energy consumption in the long term. However, due to the stochastic task generation and dynamic network conditions, the problem is particularly difficult to solve. Inspired by reinforcement learning, we transform the problem into a Markov decision process. Then, we propose an attention-based double deep Q network (DDQN) approach, in which two neural networks are employed to estimate the cumulative latency and energy rewards achieved by each action. Moreover, a context-aware attention mechanism is designed to adaptively assign different weights to the values of each action. We also conduct extensive simulations to compare the performance of our proposed approach with several heuristic and DDQN-based baselines.
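The core idea in the abstract — two value estimators (one for cumulative latency reward, one for energy reward) combined by state-dependent attention weights before greedy action selection — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear maps `W_lat` and `W_eng` stand in for the two trained Q-networks, and the context features, dimensions, and action set are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 5   # hypothetical: offload locally or to one of 4 base stations
STATE_DIM = 8   # hypothetical: task size, channel gains, queue length, ...

# Stand-ins for the two trained networks: one estimates the cumulative
# latency reward of each action, the other the cumulative energy reward.
W_lat = rng.normal(size=(STATE_DIM, N_ACTIONS))
W_eng = rng.normal(size=(STATE_DIM, N_ACTIONS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def select_action(state):
    q_latency = state @ W_lat   # Q_lat(s, a) for every action a
    q_energy = state @ W_eng    # Q_eng(s, a) for every action a
    # Context-aware weighting: a state-dependent softmax over the two
    # objectives (a simplification of the paper's attention mechanism).
    ctx = np.array([state.sum(), state.std()])  # hypothetical context features
    alpha = softmax(ctx)                        # two weights summing to 1
    q_combined = alpha[0] * q_latency + alpha[1] * q_energy
    return int(np.argmax(q_combined)), q_combined

state = rng.normal(size=STATE_DIM)
action, q = select_action(state)
print(action, q.shape)  # chosen offloading decision and per-action values
```

Because the weights `alpha` depend on the state, the trade-off between latency and energy shifts with context (e.g., channel quality or battery level), rather than being fixed by a hand-tuned scalar as in a static weighted-sum objective.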