LIDAR
Computer vision
Artificial intelligence
Computer science
RGB color model
Object detection
Robustness (evolution)
Modality
Sensor fusion
Detector
Cognitive neuroscience of visual object recognition
Segmentation
Object (grammar)
Remote sensing
Geography
Telecommunications
Gene
Sociology
Chemistry
Biochemistry
Social science
Authors
Eduardo R. Corral-Soto, Bingbing Liu
Identifier
DOI:10.1109/iv47402.2020.9304558
Abstract
In object detection for autonomous driving and robotic applications, conventional RGB cameras often fail to sense objects under extreme illumination conditions and on texture-less surfaces, while LIDAR sensors often fail to sense small or thin objects located far from the sensor. For these reasons, an intuitive and obvious choice for perception system designers is to install multiple sensors of different modalities to increase (in theory) the detection robustness. In this paper we focus on the analysis of an object detector that performs early fusion of RGB images and LIDAR 3D points. Our goal is to go beyond the intuition of simply adding more sensor modalities to improve performance, and instead analyze, quantify, and understand the performance differences, strengths and weaknesses of the object detector under three different modalities: 1) RGB-only, 2) LIDAR-only, and 3) Early fusion (RGB and LIDAR), and under two key scene variables: 1) Distance of objects from the sensor (density), and 2) Illumination (Darkness). We also propose methodologies to generate 2D weak semantic training masks, and a methodology to evaluate the object detection performance separately at different distance ranges, which provides a more reliable detection performance measure and correlates well with object LIDAR point density.
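The abstract describes evaluating object detection performance separately at different distance ranges from the sensor. The paper's exact evaluation protocol is not reproduced here; what follows is a minimal, hypothetical Python sketch of distance-binned evaluation, assuming ground-truth object centers in the sensor frame and a per-object detection flag. The bin edges, function name, and data layout are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: report detection recall per distance range,
# assuming ground-truth centers (x, y, z) in the LIDAR/sensor frame
# and a boolean mask marking which objects were detected.
import numpy as np

DISTANCE_BINS = [(0, 20), (20, 40), (40, 60)]  # meters; illustrative ranges

def recall_per_distance_bin(gt_centers, matched_mask, bins=DISTANCE_BINS):
    # gt_centers   : (N, 3) array of ground-truth object centers
    # matched_mask : (N,) boolean array, True if the object was detected
    distances = np.linalg.norm(gt_centers[:, :2], axis=1)  # ground-plane range
    results = {}
    for lo, hi in bins:
        in_bin = (distances >= lo) & (distances < hi)
        n = int(in_bin.sum())
        # Recall within this range; NaN if the range contains no objects
        results[(lo, hi)] = float(matched_mask[in_bin].mean()) if n else float("nan")
    return results

# Example usage with synthetic data: nearby objects detected, a distant one missed
gt = np.array([[5.0, 1.0, 0.0], [25.0, -2.0, 0.0], [55.0, 3.0, 0.0]])
matched = np.array([True, True, False])
print(recall_per_distance_bin(gt, matched))

Binning by range in this way is one simple means of correlating detection performance with LIDAR point density, since distant objects receive far fewer points than nearby ones.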