Joash Lee, Yanyu Cheng, Dusit Niyato, Yong Liang Guan, David González G.
Source
Journal: IEEE Transactions on Vehicular Technology [Institute of Electrical and Electronics Engineers] · Date: 2022-10-01 · Volume/Issue: 71 (10): 11120-11135 · Cited by: 10
Identifier
DOI: 10.1109/tvt.2022.3187377
Abstract
Autonomous vehicles produce sensory information from their sensing systems at high data rates. To realize the benefits of sensor fusion across different vehicles in a cooperative driving scenario, high-data-rate communication becomes essential. Current strategies for joint radar-communication (JRC) often rely on specialized hardware, require prior knowledge of the system model, and entail diminished capability in either the radar or the communication function. In this paper, we propose a framework for intelligent vehicles to conduct JRC with minimal prior knowledge of the system model and a tunable performance balance, in an environment where surrounding vehicles execute radar detection periodically, as is typical in contemporary protocols. We introduce a metric on the usefulness of data to help an intelligent vehicle decide what data should be transmitted, and to whom. The problem is cast as a generalized form of the Markov Decision Process (MDP). We identify deep reinforcement learning (DRL) algorithms and algorithmic extensions suitable for solving our JRC problem. For multi-agent scenarios, we introduce a Graph Neural Network (GNN) framework in which agents coordinate via a control channel. This framework enables modular and fair comparisons of various algorithmic extensions. Our experiments show that DRL achieves superior performance compared to non-learning algorithms. Learning inter-agent coordination in the GNN framework, based only on the Markov task reward, further improves performance.
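To make the MDP casting described in the abstract concrete, the sketch below shows a toy single-agent JRC decision problem solved with tabular Q-learning. It is a hypothetical, much-simplified illustration, not the paper's environment or DRL algorithms: the state (radar staleness, data backlog), the actions, the reward shaping, and the weight W_RADAR that mimics the tunable radar-communication balance are all assumptions made for illustration.

```python
# Illustrative sketch only: a toy joint radar-communication (JRC) decision
# problem cast as an MDP and solved with tabular Q-learning. State, action,
# and reward definitions are hypothetical simplifications, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

N_AGE, N_BACKLOG = 5, 5          # discretized radar-staleness and data-backlog levels
ACTIONS = ("radar", "transmit")  # choose radar detection or data transmission
W_RADAR = 0.5                    # assumed tunable balance between radar and comm reward

def step(state, action):
    """Toy transition/reward: radar resets sensing staleness; transmit drains backlog."""
    age, backlog = state
    if action == 0:  # radar detection
        reward = W_RADAR * (age / (N_AGE - 1))            # more valuable when stale
        age, backlog = 0, min(backlog + 1, N_BACKLOG - 1)
    else:            # data transmission
        reward = (1 - W_RADAR) * (backlog / (N_BACKLOG - 1))
        age, backlog = min(age + 1, N_AGE - 1), max(backlog - 1, 0)
    return (age, backlog), reward

Q = np.zeros((N_AGE, N_BACKLOG, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

state = (0, 0)
for t in range(20_000):
    # Epsilon-greedy action selection over the current Q estimates
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, r = step(state, a)
    # Standard Q-learning update toward the bootstrapped target
    Q[state][a] += alpha * (r + gamma * Q[next_state].max() - Q[state][a])
    state = next_state

print("Greedy policy (rows: radar staleness, cols: data backlog):")
print(np.array(ACTIONS)[Q.argmax(axis=-1)])
```

In this toy setting the learned policy trades off radar staleness against communication backlog; the paper instead uses deep function approximation and, in the multi-agent case, GNN-based coordination over a control channel.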