Computer science
Artificial intelligence
Computer vision
Radar
Point cloud
Robustness (evolution)
LiDAR
Sensor fusion
Radar imaging
Leverage (statistics)
Offset (computer science)
Radar engineering details
Remote sensing
Geography
Telecommunications
Biochemistry
Chemistry
Gene
Programming language
Authors
Zedong Yu, Weibing Wan, Maiyu Ren, Xiuyuan Zheng, Zhijun Fang
Source
Journal: IEEE Transactions on Intelligent Vehicles
[Institute of Electrical and Electronics Engineers]
Date: 2023-11-10
Volume/Issue: 9 (1): 1524-1536
Citations: 2
Identifier
DOI: 10.1109/tiv.2023.3331972
Abstract
In the context of autonomous driving environment perception, multi-modal fusion plays a pivotal role in enhancing robustness, completeness, and accuracy, thereby extending the performance boundary of the perception system. However, directly applying LiDAR-related algorithms to radar and camera fusion leads to significant challenges, such as radar sparsity, absence of height information, and noise, resulting in substantial performance loss. To address these issues, our proposed method, SparseFusion3D, utilizes a dual-branch feature-level fusion network that fully models sensor interactions, effectively mitigating the adverse effects of radar sparsity and noise on modality association. Additionally, to enhance modal correlations and accuracy while alleviating radar point cloud sparsity and measurement ambiguity, we introduce MSPCP, which compensates for point cloud offset. Moreover, we integrate Radar Painter to leverage image information and further enhance MSPCP. SparseFusion3D exhibits competitive performance compared to previous radar-camera fusion models, achieving approximately 1.5x inference speedup with similar performance to dense query methods, while also improving by 20.1% compared to the baseline approach.
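As a rough illustration of the "Radar Painter" idea mentioned in the abstract (leveraging image information to enrich sparse radar points), here is a minimal sketch of point painting: radar points are projected into the camera image with a pinhole model and decorated with per-pixel image features. All names and the projection setup are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def paint_radar_points(points_cam, image_feats, K):
    """Append image features to radar points via pinhole projection.

    points_cam : (N, 3) radar points already in the camera frame (x, y, z).
    image_feats: (H, W, C) per-pixel feature map (e.g. segmentation scores).
    K          : (3, 3) camera intrinsic matrix.
    Returns (M, 3 + C) painted points; points behind the camera or
    projecting outside the image are dropped.
    """
    H, W, _ = image_feats.shape
    # Keep only points in front of the camera (positive depth).
    in_front = points_cam[:, 2] > 1e-3
    pts = points_cam[in_front]
    # Pinhole projection: [u*z, v*z, z]^T = K @ [x, y, z]^T
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    pts, u, v = pts[valid], u[valid], v[valid]
    # Concatenate sampled image features onto the surviving points.
    return np.concatenate([pts, image_feats[v, u]], axis=1)
```

In a real pipeline the radar points would first be transformed from the radar frame into the camera frame with the sensor extrinsics, and the painted points would then feed the fusion network.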