Reinforcement learning
Computer science
Cloudlet
Computation
Scheduling (production processes)
Distributed computing
Thread (computing)
Edge computing
Computation offloading
Artificial intelligence
Enhanced Data Rates for GSM Evolution (EDGE)
Cloud computing
Algorithm
Mathematical optimization
Mathematics
Operating system
Authors
Xuefang Nie, Yunhui Yan, Tianqing Zhou, Xingbang Chen, Zhang Dingding
Source
Journal: Electronics (MDPI AG)
Date: 2023-03-31
Volume/Issue: 12 (7): 1655
Cited by: 3
Identifier
DOI: 10.3390/electronics12071655
Abstract
Cloudlet-based vehicular networks are a promising paradigm for enhancing computation services through distributed computation, where vehicle edge computing (VEC) cloudlets are deployed in the vicinity of the vehicles. To further improve computing efficiency and reduce task processing delay, we present a parallel task scheduling strategy based on a multi-agent deep reinforcement learning (DRL) approach for delay-optimal VEC in vehicular networks, where multiple computation tasks select target threads in a VEC server for execution. We model the target-thread decision of the computation tasks as a multi-agent reinforcement learning problem, which is solved with a task scheduling algorithm based on multi-agent DRL implemented in a distributed manner. The computation tasks, each acting as an agent that selects a target thread, collectively interact with the VEC environment, receive observations together with a common reward, and learn to reduce the task processing delay by updating the multi-agent deep Q-network (MADQN) with the obtained experiences. The experimental results show that the proposed DRL-based scheduling algorithm achieves significant performance improvement, reducing the task processing delay by 40% and increasing the processing success probability of computation tasks by more than 30% compared with traditional task scheduling algorithms.
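To make the scheduling idea in the abstract concrete, the sketch below gives a minimal, simplified interpretation of an MADQN-style thread-selection scheduler; it is not the authors' implementation. Each task is an independent DQN agent whose action is the index of a target thread in the VEC server, and all agents share a common reward, here assumed to be the negative makespan (the largest thread completion delay). The sequential decision order, state encoding (current per-thread load plus the agent's own task size), network sizes, and hyperparameters are all illustrative assumptions.

```python
# Minimal multi-agent DQN (MADQN) sketch for target-thread selection in a VEC
# server. Assumptions (not from the paper): agents decide sequentially, each
# observes the current per-thread load plus its own task size, and all agents
# share one reward, the negative makespan (largest thread completion delay).
import random
import numpy as np
import torch
import torch.nn as nn

N_TASKS, N_THREADS, THREAD_SPEED = 4, 3, 10.0   # toy sizes/speeds (assumed)

class QNet(nn.Module):
    """Maps [thread loads, own task size] to a Q-value per target thread."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_THREADS + 1, 64), nn.ReLU(),
                                 nn.Linear(64, N_THREADS))
    def forward(self, x):
        return self.net(x)

agents = [QNet() for _ in range(N_TASKS)]
optims = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in agents]
eps = 1.0
for episode in range(1000):
    task_sizes = np.random.uniform(1.0, 5.0, N_TASKS)  # random task workloads
    loads = np.zeros(N_THREADS)
    transitions = []
    for i, agent in enumerate(agents):
        state = torch.tensor(np.append(loads, task_sizes[i]),
                             dtype=torch.float32)
        if random.random() < eps:                       # epsilon-greedy action
            action = random.randrange(N_THREADS)
        else:
            with torch.no_grad():
                action = int(agent(state).argmax())
        loads[action] += task_sizes[i]                  # task joins that thread
        transitions.append((state, action))
    reward = -loads.max() / THREAD_SPEED                # common reward: -makespan
    # Single-step episodes, so the TD target is simply the common reward.
    for (state, action), agent, opt in zip(transitions, agents, optims):
        q = agent(state)[action]
        loss = (q - torch.tensor(reward, dtype=torch.float32)) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    eps = max(0.05, eps * 0.995)
```

The independent learners with a shared reward mirror the distributed implementation described in the abstract; a fuller treatment would add experience replay, target networks, and a richer VEC state (queueing, transmission delay, vehicle mobility), which this toy example omits.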