Keywords
Reinforcement learning
Energy consumption
Computer science
Convergence (economics)
Optimization problem
Process (computing)
Energy (signal processing)
Artificial intelligence
Distributed computing
Mathematical optimization
Engineering
Electrical engineering
Statistics
Mathematics
Algorithm
Economics
Economic growth
Operating system
Author(s)
Ming Yan, Litong Zhang, Wei Jiang, Chien Aun Chan, André F. Gygax, Ampalavanapillai Nirmalathas
Source
Journal: IEEE Sensors Journal
[Institute of Electrical and Electronics Engineers]
Date: 2024-03-07
Volume/Issue: 24 (8): 13629-13639
Citations: 5
Identifiers
DOI: 10.1109/jsen.2024.3370924
Abstract
Unmanned aerial vehicle (UAV)-assisted multiaccess edge computing (MEC) technology has garnered significant attention and has been successfully implemented in specific scenarios. Because UAVs have constrained energy capacity, optimizing network energy consumption in these scenarios is essential for overall system performance. However, dynamic changes in MEC network resources make energy-consumption optimization challenging. In this article, a multi-UAV-multiuser MEC model is established to assess the system energy consumption, and the optimization problem of multi-UAV cooperation strategies is formulated based on the model. Then, a multiagent deep deterministic policy gradient (MADDPG) algorithm based on deep reinforcement learning (DRL) is employed to solve this optimization problem. Each UAV acts as a single agent that cooperates with the other agents to train actor and critic networks for collaborative decision-making. In addition, a prioritized experience replay (PER) scheme is used to improve the convergence of the training process. Simulation results compare the performance of different algorithms and show how changes in network resources affect network energy consumption. The findings presented in this article serve as a valuable reference for future work on system performance optimization, specifically in terms of energy efficiency.
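The abstract names two algorithmic ingredients: MADDPG (per-UAV actors with centralized critics) and prioritized experience replay. The sketch below is a minimal, hypothetical PyTorch illustration of those two building blocks only; the network sizes, state/action encodings, and hyperparameters are assumptions for illustration and are not taken from the paper's implementation.

```python
# Hypothetical sketch of the MADDPG-with-PER components described in the abstract.
# Dimensions, hidden sizes, and hyperparameters are placeholder assumptions.
import numpy as np
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Per-UAV policy: maps the local observation to a continuous action
    (e.g., a trajectory step and an offloading/resource decision)."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Critic sees the joint observations and actions of all UAV agents,
    which is what makes the scheme multiagent DDPG."""
    def __init__(self, joint_obs_dim, joint_act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay: transitions with larger
    TD error are sampled more often, which is the mechanism the abstract
    credits with improving training convergence."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities, self.pos = [], [], 0

    def add(self, transition, td_error=1.0):
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities) / sum(self.priorities)
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, torch.tensor(weights, dtype=torch.float32)

    def update_priorities(self, idx, td_errors):
        for i, err in zip(idx, td_errors):
            self.priorities[i] = (abs(float(err)) + 1e-6) ** self.alpha

# Illustrative setup for a 3-UAV scenario (all dimensions are placeholders):
# each UAV gets its own actor, while every critic conditions on the joint
# observation/action of all agents, matching the MADDPG pattern.
n_uav, obs_dim, act_dim = 3, 10, 4
actors = [Actor(obs_dim, act_dim) for _ in range(n_uav)]
critics = [CentralizedCritic(n_uav * obs_dim, n_uav * act_dim) for _ in range(n_uav)]
buffer = PrioritizedReplayBuffer(capacity=100_000)
```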