Keywords
Artificial intelligence; Computer vision; Computer science; Robustness; Modality; Fusion; Radar; Encoder; Image fusion; Feature extraction; Object detection; Pattern recognition; Image
Authors
Taohua Zhou, Junjie Chen, Yining Shi, Kun Jiang, Mengmeng Yang, Diange Yang
Source
Journal: IEEE Transactions on Intelligent Vehicles
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-27
Volume/Issue: 8 (2): 1523-1535
Citations: 62
Identifier
DOI: 10.1109/tiv.2023.3240287
Abstract
Environmental perception with multi-modal fusion is crucial for autonomous driving, as it improves accuracy, completeness, and robustness. This paper focuses on fusing millimeter-wave (MMW) radar and camera data for 3D object detection. A novel method is proposed that performs feature-level fusion under the bird's-eye view (BEV) to obtain a better feature representation. First, radar points are augmented through temporal accumulation and passed to a spatial-temporal encoder for radar feature extraction. Meanwhile, multi-scale 2D image features, which adapt to various spatial scales, are extracted by an image backbone and neck. The image features are then transformed to the BEV with a designed view transformer. The multi-modal features are fused by a two-stage model consisting of point-fusion and ROI-fusion. Finally, a detection head regresses object categories and 3D locations. Experimental results demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on the most important detection metrics, mean average precision (mAP) and nuScenes detection score (NDS), on the challenging nuScenes dataset.
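The abstract describes a five-stage pipeline: radar point accumulation and encoding, multi-scale image feature extraction, image-to-BEV view transformation, two-stage feature fusion, and a detection head. The following is a minimal PyTorch sketch of that data flow only; every module, tensor shape, and the BEV grid layout here are illustrative assumptions, the two-stage point/ROI fusion is collapsed into a single concatenation for brevity, and none of it is the authors' implementation.

# Minimal sketch of the radar-camera BEV fusion pipeline outlined in the
# abstract. All module designs, names, and shapes are illustrative
# assumptions, NOT the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RadarEncoder(nn.Module):
    # Stand-in for the spatial-temporal encoder over accumulated radar points.
    def __init__(self, in_dim=5, out_dim=64):  # e.g. x, y, z, RCS, time offset
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))
    def forward(self, points):          # points: (B, N, in_dim)
        return self.mlp(points)         # per-point features: (B, N, out_dim)

class ViewTransformer(nn.Module):
    # Toy image-to-BEV lift: a learned 1x1 projection plus resampling.
    # A real view transformer would use depth estimation and camera geometry.
    def __init__(self, img_dim=64, bev_dim=64, bev_size=32):
        super().__init__()
        self.bev_size = bev_size
        self.proj = nn.Conv2d(img_dim, bev_dim, 1)
    def forward(self, img_feat):        # img_feat: (B, C, H, W)
        return F.adaptive_avg_pool2d(self.proj(img_feat), self.bev_size)

class DetectionHead(nn.Module):
    # Predicts class logits and 3D box parameters (x, y, z, w, l, h, yaw)
    # per BEV cell.
    def __init__(self, bev_dim=64, num_classes=10, box_dim=7):
        super().__init__()
        self.cls = nn.Conv2d(bev_dim, num_classes, 1)
        self.box = nn.Conv2d(bev_dim, box_dim, 1)
    def forward(self, bev):
        return self.cls(bev), self.box(bev)

class RadarCameraBEVFusion(nn.Module):
    def __init__(self, bev_size=32, bev_dim=64):
        super().__init__()
        self.bev_size = bev_size
        self.radar_enc = RadarEncoder(out_dim=bev_dim)
        self.img_backbone = nn.Sequential(   # stand-in for backbone + neck
            nn.Conv2d(3, bev_dim, 3, stride=2, padding=1), nn.ReLU())
        self.view_tf = ViewTransformer(bev_dim, bev_dim, bev_size)
        self.fuse = nn.Conv2d(2 * bev_dim, bev_dim, 1)  # crude single-stage fusion
        self.head = DetectionHead(bev_dim)

    def scatter_radar(self, feats, points):
        # Scatter per-point radar features into the BEV grid by x, y cell
        # index (assumed 100 m x 100 m range); duplicate hits overwrite.
        B, N, C = feats.shape
        bev = feats.new_zeros(B, C, self.bev_size, self.bev_size)
        ix = ((points[..., 0].clamp(-50, 49.9) + 50) / 100 * self.bev_size).long()
        iy = ((points[..., 1].clamp(-50, 49.9) + 50) / 100 * self.bev_size).long()
        for b in range(B):               # per-sample loop for clarity, not speed
            bev[b][:, iy[b], ix[b]] = feats[b].t()
        return bev

    def forward(self, image, radar_points):
        radar_bev = self.scatter_radar(self.radar_enc(radar_points), radar_points)
        img_bev = self.view_tf(self.img_backbone(image))
        fused = self.fuse(torch.cat([radar_bev, img_bev], dim=1))
        return self.head(fused)

model = RadarCameraBEVFusion()
cls_map, box_map = model(torch.randn(2, 3, 256, 256), torch.randn(2, 100, 5))
print(cls_map.shape, box_map.shape)    # (2, 10, 32, 32) (2, 7, 32, 32)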