Reinforcement learning
Scalability
Transformer
Computer science
Reinforcement
Engineering
Artificial intelligence
Electrical engineering
Voltage
Structural engineering
Database
Authors
Dezhi Chen, Qi Qi, Qianlong Fu, Jingyu Wang, Jianxin Liao, Zhu Han
Source
Journal: IEEE Transactions on Intelligent Transportation Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-16
Identifier
DOI:10.1109/tits.2024.3358010
Abstract
Compared with terrestrial networks, unmanned aerial vehicles (UAVs) offer flexible deployment and strong adaptability, making them an important supplement to intelligent transportation systems (ITS). In this paper, we focus on the multi-UAV network area coverage problem (ACP), which requires UAVs to make intelligent long-term trajectory decisions in complex, scalable network environments. Multi-agent deep reinforcement learning (DRL) has recently emerged as an effective tool for solving long-term decision problems. However, since the input dimension of a multi-layer perceptron (MLP)-based deep neural network (DNN) is fixed, it is difficult for a standard DNN to adapt to a variable number of UAVs and network users. Therefore, we combine the Transformer with DRL to meet the scalability requirements of the network and propose a Transformer-based deep multi-agent reinforcement learning (T-MARL) algorithm. The Transformer can adapt to variable input dimensions and extract important information from complex network states through its attention module. In our research, we find that random initialization of the Transformer may cause DRL training to fail, so we propose a baseline-assisted pre-training scheme. This scheme quickly provides an initial policy model for the UAVs based on imitation learning and uses the temporal-difference TD(1) algorithm to initialize the policy-evaluation network. Finally, based on parameter sharing, T-MARL is applicable to any standard DRL algorithm and supports expansion to networks of different sizes. Experimental results show that T-MARL enables cooperative behavior among UAVs and performs outstandingly on the ACP.
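The abstract's central claim is that attention handles a variable number of agents where an MLP cannot, because the same learned projection matrices apply to each agent's state regardless of how many agents are present. A minimal sketch of this idea (not the paper's implementation; the feature dimension, weight shapes, and single-head form are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(states, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of agent states.

    states: (N, d) array, one row per UAV or network user. N may vary
    between calls, while the learned weights Wq/Wk/Wv stay fixed (d, d),
    so one set of parameters serves networks of different sizes.
    """
    Q, K, V = states @ Wq, states @ Wk, states @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (N, N) pairwise attention
    return softmax(scores, axis=-1) @ V       # (N, d) aggregated features

rng = np.random.default_rng(0)
d = 8  # hypothetical per-agent feature size
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

out3 = self_attention(rng.standard_normal((3, d)), Wq, Wk, Wv)  # 3 agents
out5 = self_attention(rng.standard_normal((5, d)), Wq, Wk, Wv)  # 5 agents
print(out3.shape, out5.shape)  # (3, 8) (5, 8): same weights, both sizes
```

An MLP with a flattened input would need a fixed N baked into its first layer; here the same (d, d) weights produce a per-agent output for any N, which is the property the paper exploits for scalability.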