Computer science
Reinforcement learning
Server
Quality of experience
Energy consumption
Enhanced Data Rates for GSM Evolution (EDGE)
Edge computing
Task (project management)
Computer network
Distributed computing
Quality of service
Artificial intelligence
Engineering
Electrical engineering
Systems engineering
Authors
Xiaoming He,Haodong Lu,Yingchi Mao,Kun Wang
Identifiers
DOI:10.1109/globecom42002.2020.9348050
Abstract
In the transportation industry, task offloading services in the edge-intelligent Internet of Vehicles (IoV) are expected to provide vehicles with a better Quality of Experience (QoE). However, the varying status of diverse edge servers and vehicles, as well as varying vehicular offloading modes, pose a challenge to task offloading services. Therefore, to enhance QoE satisfaction, we first introduce a novel QoE model. Specifically, the proposed QoE model is constrained by energy consumption and accounts for three factors: (1) intelligent vehicles equipped with caching spaces and computing units may act as carriers; (2) the varied computational and caching capacities of edge servers can empower offloading; (3) the unpredictable routes of vehicles and edge servers lead to diverse information transmission. We then propose an improved deep reinforcement learning (DRL) algorithm, named RA-DDPG, which augments deep deterministic policy gradient (DDPG) with prioritized experience replay (PER) and stochastic weight averaging (SWA) mechanisms to seek an optimal offloading mode while saving energy. Extensive experiments confirm the better stability and convergence of our RA-DDPG algorithm compared with existing work, and further indicate that the proposed algorithm improves the QoE value.
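The abstract names two standard DRL mechanisms layered onto DDPG: prioritized experience replay (PER), which samples transitions in proportion to their priority and corrects the resulting bias with importance-sampling weights, and stochastic weight averaging (SWA), which maintains a running mean of parameter snapshots. The paper's actual RA-DDPG implementation is not given here; the following is a minimal, framework-free sketch of these two generic building blocks (all names, hyperparameters such as `alpha` and `beta`, and the list-based storage are illustrative assumptions, not the authors' code):

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch (illustrative, not the paper's code).

    Transitions are sampled with probability proportional to priority**alpha;
    importance-sampling weights (normalized to a max of 1) correct the
    bias that non-uniform sampling introduces into the gradient estimate.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.data = []        # stored transitions, e.g. (s, a, r, s') tuples
        self.priorities = []  # priority**alpha for each stored transition
        self.pos = 0          # next write position (circular buffer)

    def add(self, transition, priority=1.0):
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority ** self.alpha)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority ** self.alpha
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        n = len(self.data)
        # importance-sampling weights, normalized so the largest is 1
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return idxs, [self.data[i] for i in idxs], weights

    def update_priorities(self, idxs, new_priorities):
        # called after a learning step with the fresh TD errors
        for i, p in zip(idxs, new_priorities):
            self.priorities[i] = p ** self.alpha


def swa_update(swa_params, new_params, n_averaged):
    """One SWA step: incremental running mean of parameter snapshots."""
    return [s + (p - s) / (n_averaged + 1)
            for s, p in zip(swa_params, new_params)]
```

In a full DDPG loop, `sample` would feed the critic update (with the returned weights scaling each TD error), `update_priorities` would be called with the new absolute TD errors, and `swa_update` would be applied to the actor/critic weights at a fixed interval late in training.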