Computer science
Grid
Sensor fusion
Occupancy
Occupancy grid mapping
Perception
Control (management)
Range (aeronautics)
Artificial intelligence
Real-time computing
Machine learning
Engineering
Geography
Civil engineering
Geodesy
Neuroscience
Aerospace engineering
Robotics
Biology
Mobile robot
Authors
Cheng Chang,Jiawei Zhang,Xiaoying Zhang,Wenqin Zhong,Xinyu Peng,Shen Li,Zhiheng Li
Source
Journal: IEEE Transactions on Intelligent Vehicles
[Institute of Electrical and Electronics Engineers]
Date: 2023-11-01
Volume/Issue: 8 (11): 4498-4514
Cited by: 1
Identifiers
DOI: 10.1109/tiv.2023.3293954
Abstract
Bird's-Eye-View (BEV) perception can naturally represent real-world scenes, which is conducive to multimodal data processing and fusion. BEV data contain rich semantics and integrate the information of driving scenes, playing an important role in research related to autonomous driving. However, BEV maps constructed from single-vehicle perception suffer from certain issues, such as low accuracy and insufficient range, and thus cannot be well applied to scenario understanding and driving situation prediction. To address these challenges, this paper proposes a novel data-driven approach based on vehicle-to-everything (V2X) communication. The roadside unit or cloud center collects local BEV data from all connected and automated vehicles (CAVs) within the control area, then fuses them and predicts the future global BEV occupancy grid map. This provides powerful support for driving safety warning, cooperative driving planning, cooperative traffic control, and other applications. More precisely, we develop an attention-based cooperative BEV fusion and prediction model called BEV-V2X. We also compare the performance of BEV-V2X with that of single-vehicle prediction. Experimental results demonstrate that our proposed method achieves higher accuracy. Even in cases where not all vehicles are CAVs, the model can still comprehensively estimate and predict global spatiotemporal changes. We also discuss the impact of the CAV rate, single-vehicle perception ability, and grid size on the fusion and prediction results.
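The abstract describes an attention-based model that fuses local BEV occupancy grids gathered from multiple CAVs and predicts a global occupancy grid. The paper does not include code here, so the snippet below is only a minimal illustrative sketch of that idea in PyTorch: per-cell attention across vehicles followed by an occupancy decoder. All names (CooperativeBEVFusion, embed_dim, num_heads, grid_size) are hypothetical, and temporal prediction and projection into a common global frame are omitted; this is not the authors' BEV-V2X architecture.

```python
# Minimal sketch (not the authors' BEV-V2X code): attention-based fusion of
# per-CAV local BEV occupancy grids, followed by decoding a global occupancy
# grid. Module and parameter names are hypothetical.
import torch
import torch.nn as nn

class CooperativeBEVFusion(nn.Module):
    def __init__(self, in_channels=1, embed_dim=64, num_heads=4):
        super().__init__()
        # Encode each vehicle's local BEV grid into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(embed_dim, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Cross-vehicle attention: each grid cell attends over the set of CAVs.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Decode fused features into occupancy probabilities for the global grid.
        self.decoder = nn.Conv2d(embed_dim, 1, kernel_size=1)

    def forward(self, local_grids):
        # local_grids: (B, N_cav, C, H, W) local occupancy grids assumed to be
        # already projected into a common global frame (projection not shown).
        B, N, C, H, W = local_grids.shape
        feats = self.encoder(local_grids.view(B * N, C, H, W))
        feats = feats.view(B, N, -1, H, W)                      # (B, N, D, H, W)
        # For every cell, treat the N vehicle features as an attention sequence.
        tokens = feats.permute(0, 3, 4, 1, 2).reshape(B * H * W, N, -1)
        fused, _ = self.attn(tokens, tokens, tokens)             # (B*H*W, N, D)
        fused = fused.mean(dim=1).view(B, H, W, -1).permute(0, 3, 1, 2)
        return torch.sigmoid(self.decoder(fused))                # (B, 1, H, W)

if __name__ == "__main__":
    model = CooperativeBEVFusion()
    grids = torch.rand(2, 3, 1, 64, 64)   # 2 scenes, 3 CAVs, 64x64 local grids
    print(model(grids).shape)             # torch.Size([2, 1, 64, 64])
```

In this sketch, attention operates per grid cell across vehicles, so a cell observed by several CAVs can weight the more informative observation; the actual BEV-V2X model additionally handles temporal prediction, partial CAV penetration, and varying grid sizes, none of which are modeled here.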