Computer science
Reinforcement learning
Computation offloading
Computation
Edge device
Join
Bandwidth (computing)
Distributed computing
Enhanced Data Rates for GSM Evolution
Server
Edge computing
Artificial intelligence
Computer network
Cloud computing
Operating system
Algorithm
Programming language
Authors
Ning Chen, Sheng Zhang, Zhuzhong Qian, Jie Wu, Sanglu Lu
Identifier
DOI:10.1109/icpads47876.2019.00066
Abstract
Computation offloading is essential to the interaction between users and compute-intensive applications. Current research focuses on deciding whether to execute an application locally or remotely, but ignores the specific proportion of the application to offload. Full offloading cannot make the best use of client and server resources. In this paper, we propose an innovative reinforcement learning (RL) method to solve the proportional computation problem. We consider a common offloading scenario with time-variant bandwidth and heterogeneous devices, in which each device constantly generates applications. For each application, the client has to choose whether to execute it locally or remotely, and determine the proportion to be offloaded. We formalize the problem as a long-term optimization problem and then propose an RL-based algorithm to solve it. The basic idea is to estimate the benefit of each possible decision and select the decision with the maximum benefit. Instead of adopting the original Deep Q Network (DQN), we propose Advanced DQN (ADQN) by adding a Priority Buffer Mechanism and an Expert Buffer Mechanism, which improve sample utilization and overcome the cold-start problem, respectively. The experimental results show ADQN's high feasibility and efficiency compared with several traditional policies, such as the None Offloading Policy, Random Offloading Policy, Link Capacity Optimal Policy, and Computing Capability Optimal Policy. Finally, we analyse the effect of expert buffer size and learning rate on ADQN's performance.
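The abstract's two additions to DQN, a priority buffer and an expert buffer, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the class names, the proportional (rather than rank- or sum-tree-based) priority sampling, and the fixed expert fraction per batch are all assumptions made for brevity.

```python
import random


class PriorityBuffer:
    """Replay buffer that samples stored transitions in proportion to a
    priority value (e.g. TD error), so informative samples are reused more
    often -- a rough stand-in for the Priority Buffer Mechanism."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []        # transitions: (state, action, reward, next_state)
        self.priorities = []  # one weight per stored transition

    def add(self, transition, priority=1.0):
        if len(self.data) >= self.capacity:
            # Evict the oldest transition when full.
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Higher-priority transitions are drawn more frequently.
        k = min(batch_size, len(self.data))
        return random.choices(self.data, weights=self.priorities, k=k)


class ExpertBuffer(PriorityBuffer):
    """Buffer pre-filled with expert demonstrations, giving the agent useful
    training samples before it has explored -- mitigating cold start."""

    def __init__(self, demonstrations):
        super().__init__(capacity=len(demonstrations))
        for t in demonstrations:
            self.add(t)


def sample_training_batch(expert_buf, priority_buf, batch_size, expert_frac=0.25):
    """Mix a fixed fraction of expert demonstrations into each batch
    (the mixing ratio is an assumption, not taken from the paper)."""
    n_expert = int(batch_size * expert_frac)
    return (expert_buf.sample(n_expert)
            + priority_buf.sample(batch_size - n_expert))
```

In such a setup, a transition's priority would be updated after each training step from its latest TD error, and the expert buffer would be populated from a heuristic policy (e.g. the link-capacity-optimal baseline) before learning begins.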