Computer Science
Reinforcement Learning
Scheduling (production processes)
Distributed Computing
Base Station
Cluster Analysis
Edge Device
Computer Network
Mathematical Optimization
Artificial Intelligence
Cloud Computing
Mathematics
Operating System
Authors
Vishwas N Udupa,Vamsi Krishna Tumuluru
Identifier
DOI:10.1109/aisc56616.2023.10085158
Abstract
Multi-access edge computing (MEC) and ultra-dense networks (UDN) are a special case of 5G cellular networks where the density of base stations is higher than that of the end users (UEs). Hence, a UE is likely to be within the coverage of multiple base stations at any given time instant. This paper provides a scheduling algorithm for multi-access edge computing in UDNs. Unlike existing works, the transmission scheduling (i.e., assigning the base stations for each client) and the computation resource scheduling are considered jointly. Due to the uncertainties in task generation and path losses, we model the scheduling problem as a deep reinforcement learning (DRL) problem that maximizes the total utility of the clients. The DRL model (based on actor-critic neural networks) is trained using the deep deterministic policy gradient (DDPG) algorithm. The results show convergence of the total utility and better performance compared to a greedy policy and a priority-based scheduling policy.
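The abstract names the training method (DDPG over actor-critic networks) but not its details. Below is a minimal PyTorch sketch of such an actor-critic pair and one DDPG update step, assuming a continuous scheduling action (e.g., per-base-station transmission/compute shares); the state/action dimensions, network sizes, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal DDPG actor-critic sketch for an MEC/UDN-style scheduler (assumed setup).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 8   # assumed: task backlogs + channel gains -> scheduling shares
GAMMA, TAU = 0.99, 0.005        # assumed discount factor and target soft-update rate

class Actor(nn.Module):
    """Maps a network state to a scheduling action in [0, 1]^ACTION_DIM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Sigmoid(),
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Estimates Q(s, a): the expected discounted total client utility."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_target, critic_target = Actor(), Critic()
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch):
    """One DDPG step on a replay-buffer batch of (state, action, reward, next_state)."""
    s, a, r, s_next = batch
    # Critic: regress toward the bootstrapped target utility.
    with torch.no_grad():
        q_target = r + GAMMA * critic_target(s_next, actor_target(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's estimate of the utility of its own actions.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft-update the target networks toward the online networks.
    for tgt, src in ((actor_target, actor), (critic_target, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)

# Example usage with random placeholder transitions (real data would come from
# simulated task arrivals and path losses):
batch = (torch.randn(64, STATE_DIM), torch.rand(64, ACTION_DIM),
         torch.randn(64, 1), torch.randn(64, STATE_DIM))
ddpg_update(batch)
```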