Yumeng Li, Yunhe Zhang, Tong Guo, Yu Liu, Yisheng Lv, Wenbo Du
Source
Journal: IEEE Transactions on Intelligent Vehicles [Institute of Electrical and Electronics Engineers] · Date: 2024-01-01 · Pages: 1-13
Identifier
DOI: 10.1109/tiv.2024.3364652
Abstract
The escalating density of airspace has led to sharply increased conflicts between aircraft. Efficient and scalable conflict resolution methods are crucial to mitigate collision risks. Existing learning-based methods become less effective as the number of aircraft grows because of their redundant information representations. In this paper, to accommodate the increased airspace density, a novel graph reinforcement learning (GRL) method is presented to efficiently learn deconfliction strategies. A time-evolving conflict graph is exploited to represent the local state of each aircraft and the global spatiotemporal relationships among them. Equipped with the conflict graph, GRL can efficiently learn deconfliction strategies by selectively aggregating aircraft state information through a multi-head-attention-boosted graph neural network. Furthermore, a temporal regularization mechanism is proposed to enhance learning stability in highly dynamic environments. Comprehensive experimental studies have been conducted on an OpenAI Gym-based flight simulator. The results demonstrate that, compared with existing state-of-the-art learning-based methods, GRL requires substantially less training time while producing significantly better deconfliction strategies in terms of safety and efficiency metrics. In addition, GRL exhibits strong scalability and robustness as the number of aircraft increases.
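The abstract gives no implementation details, but the "selective aggregation" it describes resembles masked multi-head attention restricted to the conflict graph's edges. Below is a minimal PyTorch sketch under that assumption; the class name ConflictGraphAttention, the separation threshold, and all dimensions are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ConflictGraphAttention(nn.Module):
    """Masked multi-head attention over a conflict graph (a sketch, not the
    paper's implementation). Nodes are aircraft; an edge (i, j) marks a
    pairwise conflict, so each aircraft aggregates state information only
    from the aircraft it is currently in conflict with."""

    def __init__(self, state_dim: int, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(state_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, states: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # states:    (batch, n_aircraft, state_dim) local aircraft states
        # adjacency: (batch, n_aircraft, n_aircraft) with 1.0 on conflict edges
        x = self.embed(states)
        # Block attention between non-conflicting pairs; keep the diagonal so
        # an aircraft with no conflicts still attends to its own state.
        eye = torch.eye(adjacency.size(-1), device=adjacency.device)
        blocked = (adjacency + eye).clamp(max=1.0) == 0  # True = masked out
        # nn.MultiheadAttention expects a 3D bool mask of shape
        # (batch * num_heads, n_aircraft, n_aircraft).
        blocked = blocked.repeat_interleave(self.attn.num_heads, dim=0)
        out, _ = self.attn(x, x, x, attn_mask=blocked)
        return out

# Usage: build a conflict graph from pairwise distances, then aggregate.
positions = torch.rand(1, 8, 2) * 100.0             # 8 aircraft in a sector
dists = torch.cdist(positions, positions)           # pairwise distances
adjacency = ((dists < 30.0) & (dists > 0)).float()  # hypothetical threshold
states = torch.randn(1, 8, 6)                       # 6-dim state per aircraft
layer = ConflictGraphAttention(state_dim=6)
print(layer(states, adjacency).shape)               # torch.Size([1, 8, 64])
```

The design point the abstract hints at: masking confines message passing to conflict edges, so each aircraft's representation stays compact as traffic density grows, rather than flattening the states of all aircraft into one redundant input vector.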