Computer vision
LiDAR
Computer science
Artificial intelligence
Robustness
Object detection
Radar
Sensor fusion
Radar engineering details
Radar imaging
Image fusion
Remote sensing
Pattern recognition
Geography
Image
Biochemistry
Telecommunications
Gene
Chemistry
Authors
Zhiyi Su, Bo Ming, Wei Hua
Identifier
DOI:10.1109/sensors56945.2023.10324930
Abstract
Object detection plays a pivotal role in achieving reliable and accurate perception for autonomous driving systems, encompassing tasks such as estimating object location, size, category, and other attributes from sensory inputs. The prevailing sensor modalities in this domain are LiDAR, camera, and radar, with multi-modal fusion widely acknowledged as a means to improve object detection outcomes. Among these modalities, camera and radar exhibit complementary characteristics and hold the potential for comprehensive object profile recognition. Furthermore, they are cost-effective compared to LiDAR solutions. However, the orthogonal nature of camera and radar data poses significant challenges for effective fusion. This paper introduces a novel framework that integrates camera and radar inputs to enhance perception robustness. Our approach fuses 2D detection proposals derived from camera imagery with radar points, enabling reliable object detection. To support this fusion framework, we present a calibration algorithm and demonstrate its performance through extensive evaluation on a real-world dataset.
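The abstract describes fusing 2D detection proposals from the camera with radar points via a calibration step. A common way to realize this pairing is to project radar returns into the image plane using the radar-to-camera extrinsics and the camera intrinsics, then associate each projected point with the 2D box that contains it. The sketch below illustrates that general pipeline only; the matrices `K`, `R`, `t`, the box format, and the function names are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Minimal sketch of radar-to-camera association (illustrative, not the
# paper's method). Assumes radar points are 3D in the radar frame, with
# known extrinsics (R, t) to the camera frame and camera intrinsics K.

def project_radar_to_image(points_3d, K, R, t):
    """Project Nx3 radar points (radar frame) onto the image plane (Nx2)."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # radar frame -> camera frame
    uvw = K @ cam                             # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T               # perspective divide -> (u, v)

def associate(uv, boxes):
    """Assign each projected point to the first 2D box containing it.

    boxes: list of (x1, y1, x2, y2) pixel rectangles.
    Returns a box index per point, or -1 if no box contains it.
    """
    labels = []
    for u, v in uv:
        idx = -1
        for i, (x1, y1, x2, y2) in enumerate(boxes):
            if x1 <= u <= x2 and y1 <= v <= y2:
                idx = i
                break
        labels.append(idx)
    return labels

# Toy example: identity extrinsics, a 640x480-style intrinsic matrix.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0],    # 10 m straight ahead
                [5.0, 0.0, 10.0]])   # 5 m to the right
uv = project_radar_to_image(pts, K, R, t)
labels = associate(uv, [(300.0, 200.0, 340.0, 260.0)])
```

In this toy setup the first point projects to the image center (320, 240) and falls inside the single box, while the second projects to (720, 240) and is left unassigned; a full system would additionally handle points behind the camera (non-positive depth) and resolve points covered by multiple overlapping boxes.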