Computer science
Fusion
Artificial intelligence
Computer vision
Object detection
Sensor fusion
Pattern recognition (psychology)
Linguistics
Philosophy
Authors
Jiangfeng Bi, Haiyue Wei, Guoxin Zhang, Kuihe Yang, Ziying Song
Source
Journal: IEEE Latin America Transactions
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-23
Volume/Issue: 22 (2): 106-112
Citations: 6
Identifier
DOI:10.1109/tla.2024.10412035
Abstract
In the realm of autonomous driving, LiDAR and camera sensors play an indispensable role, furnishing pivotal observational data for the critical task of precise 3D object detection. Existing fusion algorithms effectively exploit the complementary data from both sensors. However, these methods typically concatenate the raw point cloud data with pixel-level image features, a process that introduces errors and loses critical information embedded in each modality. To mitigate this loss of feature information, this paper proposes a Cross-Attention Dynamic Fusion (CADF) strategy that dynamically fuses the two heterogeneous data sources. In addition, we address the problem of insufficient data augmentation for these two diverse modalities by proposing a Synchronous Data Augmentation (SDA) strategy designed to improve training efficiency. We have tested our method on the KITTI and nuScenes datasets, with promising results. Notably, our top-performing model attained an 82.52% mAP on the KITTI test benchmark, outperforming other state-of-the-art methods.
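The abstract describes the CADF and SDA strategies only at a high level, so the sketch below is a minimal, illustrative PyTorch rendering of the two ideas rather than the authors' implementation: a cross-attention block that lets LiDAR queries attend to image features instead of concatenating them, and a toy augmentation that flips the point cloud, image, and 3D boxes together. The module names, feature shapes, gating design, and box layout (x, y, z, l, w, h, yaw) are all assumptions made for illustration.

```python
# Minimal sketch of cross-attention fusion and synchronized augmentation.
# NOT the paper's code; shapes, names, and hyperparameters are assumed.
import torch
import torch.nn as nn


class CrossAttentionDynamicFusion(nn.Module):
    """Fuse per-token LiDAR features with image features via cross-attention,
    instead of naive concatenation, so the image evidence is weighted per token."""

    def __init__(self, lidar_dim=128, image_dim=256, fused_dim=128, num_heads=4):
        super().__init__()
        self.q_proj = nn.Linear(lidar_dim, fused_dim)   # queries from LiDAR
        self.kv_proj = nn.Linear(image_dim, fused_dim)  # keys/values from image
        self.attn = nn.MultiheadAttention(fused_dim, num_heads, batch_first=True)
        # Dynamic gate: how much image evidence to mix into each LiDAR token.
        self.gate = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim), nn.Sigmoid())

    def forward(self, lidar_feats, image_feats):
        # lidar_feats: (B, N, lidar_dim) voxel/point features (assumed layout)
        # image_feats: (B, M, image_dim) flattened pixel features (assumed layout)
        q = self.q_proj(lidar_feats)
        kv = self.kv_proj(image_feats)
        attended, _ = self.attn(q, kv, kv)             # image evidence per LiDAR token
        g = self.gate(torch.cat([q, attended], dim=-1))
        return q + g * attended                        # gated (dynamic) fusion


def synchronized_flip(points, image, boxes):
    """Toy synchronous augmentation: mirror the point cloud, the image, and the
    3D boxes together so the two modalities stay geometrically consistent.
    Assumes points are (N, >=3) and boxes are (K, 7) as [x, y, z, l, w, h, yaw]."""
    points = points.clone()
    points[:, 1] = -points[:, 1]           # mirror points across the x-z plane
    image = torch.flip(image, dims=[-1])   # flip the image left-right
    boxes = boxes.clone()
    boxes[:, 1] = -boxes[:, 1]             # mirror box centers
    boxes[:, 6] = -boxes[:, 6]             # mirror yaw angles
    return points, image, boxes


if __name__ == "__main__":
    fusion = CrossAttentionDynamicFusion()
    lidar = torch.randn(2, 100, 128)
    image = torch.randn(2, 400, 256)
    print(fusion(lidar, image).shape)  # torch.Size([2, 100, 128])
```

The gated residual here is one plausible reading of "dynamic" fusion; the paper itself should be consulted for the actual attention layout, feature resolutions, and which augmentations are synchronized across the two modalities.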