Computer science
Robustness
Artificial intelligence
Point cloud
Discriminative
Object detection
Computer vision
LiDAR
Cascade
Modality
Sensor fusion
Fusion
Image fusion
Pattern recognition
Image
Remote sensing
Engineering
Authors
Zhe Liu,Tengteng Huang,Bingling Li,Xiwu Chen,Xi Wang,Xiang Bai
Identifier
DOI:10.1109/tpami.2022.3228806
Abstract
Recently, fusing the LiDAR point cloud and camera image to improve the performance and robustness of 3D object detection has received more and more attention, as these two modalities naturally possess strong complementarity. In this paper, we propose EPNet++ for multi-modal 3D object detection by introducing a novel Cascade Bi-directional Fusion (CB-Fusion) module and a Multi-Modal Consistency (MC) loss. More concretely, the proposed CB-Fusion module enriches point features with semantic information absorbed from the image features in a cascade bi-directional interaction fusion manner, leading to more powerful and discriminative feature representations. The MC loss explicitly enforces consistency between the predicted scores of the two modalities to obtain more comprehensive and reliable confidence scores. The experimental results on the KITTI, JRDB and SUN-RGBD datasets demonstrate the superiority of EPNet++ over state-of-the-art methods. Besides, we emphasize a critical but easily overlooked problem: the performance and robustness of a 3D detector in sparser scenes. Extensive experiments show that EPNet++ outperforms existing SOTA methods by remarkable margins in highly sparse point cloud cases, suggesting a viable direction for reducing the expensive cost of LiDAR sensors.
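The MC loss described above penalizes disagreement between the confidence scores that the LiDAR branch and the image branch predict for the same candidate box. The paper's exact formulation is not given in this abstract; the following is a minimal sketch of the idea, assuming a hypothetical `mc_loss` that simply takes the mean squared difference between the two modalities' sigmoid-normalized scores.

```python
import numpy as np

def sigmoid(x):
    """Map raw logits to (0, 1) confidence scores."""
    return 1.0 / (1.0 + np.exp(-x))

def mc_loss(lidar_logits, image_logits):
    """Hypothetical multi-modal consistency loss (illustrative only):
    mean squared difference between per-box confidence scores predicted
    from the LiDAR and image branches. Zero when both modalities agree."""
    p_lidar = sigmoid(np.asarray(lidar_logits, dtype=float))
    p_image = sigmoid(np.asarray(image_logits, dtype=float))
    return float(np.mean((p_lidar - p_image) ** 2))
```

With identical logits the loss is zero; as the two branches diverge (e.g. logits 2.0 vs. -2.0 for the same box), the loss grows, pushing the detector toward agreement between modalities during training.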