Reinforcement learning
Computer science
Asynchronous communication
Task (project management)
Mobile edge computing
Distributed computing
Base station
Mobile device
Enhanced Data Rates for GSM Evolution (EDGE)
State (computer science)
Computer network
Artificial intelligence
Algorithm
Engineering
Systems engineering
Operating system
Authors
Ziqi Lin,Bo Gu,Xu Zhang,Difei Yi,Yu Han
Identifiers
DOI:10.1109/wcnc51071.2022.9771739
Abstract
Multi-access edge computing (MEC) and ultra-dense networking (UDN) are recognized as two promising paradigms for future mobile networks that can be utilized to improve spectrum efficiency and the quality of computational experience (QoCE). In this paper, we study the task offloading problem in an MEC-enabled UDN architecture, aiming to minimize task duration while satisfying energy budget constraints. Due to the dynamics of the environment and parameter uncertainty, designing an optimal task offloading algorithm is highly challenging. Consequently, we propose an online task offloading algorithm based on a state-of-the-art deep reinforcement learning (DRL) technique: asynchronous advantage actor-critic (A3C). Notably, the proposed method requires neither instantaneous channel state information (CSI) nor prior knowledge of the computational capabilities of the base stations. Simulations show that our method learns a good offloading policy that achieves near-optimal task allocation while meeting the energy budget constraints of mobile devices in a UDN environment.
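The advantage actor-critic idea behind the abstract can be illustrated with a deliberately simplified sketch: a single-worker, one-step advantage actor-critic with linear function approximation, not the paper's asynchronous multi-worker deep networks. The number of base stations, the queue/duration model, the reward, and the learning rates are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BS = 4                # hypothetical number of candidate base stations
STATE_DIM = 1 + N_BS    # task size + per-BS queue backlog (illustrative state)

# Linear actor (softmax policy over base stations) and linear critic
# (scalar value estimate) -- toy stand-ins for the deep networks in A3C.
theta = np.zeros((STATE_DIM, N_BS))
w = np.zeros(STATE_DIM)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def task_reward(state, action):
    """Toy model: task duration grows with the chosen BS's queue backlog;
    the reward is the negative duration (shorter is better)."""
    task_size = state[0]
    backlog = state[1 + action]
    duration = task_size * (1.0 + backlog) + 0.1 * rng.random()
    return -duration

def train(episodes=3000, lr_actor=0.05, lr_critic=0.05):
    global theta, w
    for _ in range(episodes):
        # BS 0 is kept lightly loaded, so a good policy should prefer it.
        backlogs = 0.5 + 0.5 * rng.random(N_BS)
        backlogs[0] = 0.1 * rng.random()
        state = np.concatenate(([0.5 + rng.random()], backlogs))

        probs = softmax(state @ theta)
        action = rng.choice(N_BS, p=probs)
        reward = task_reward(state, action)

        # One-step advantage: observed reward minus the critic's baseline.
        advantage = reward - state @ w

        # Critic: move the value estimate toward the observed reward.
        w += lr_critic * advantage * state
        # Actor: policy-gradient step weighted by the advantage.
        grad_log = -probs
        grad_log[action] += 1.0
        theta += lr_actor * advantage * np.outer(state, grad_log)

train()
test_state = np.array([1.0, 0.05, 0.6, 0.7, 0.8])
print(softmax(test_state @ theta))  # probability mass should favor BS 0
```

Subtracting the critic's value estimate from the reward (the advantage) reduces the variance of the policy-gradient update, which is the core reason actor-critic methods such as A3C train stably in dynamic environments like this one.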