Concepts
Computer science, Object detection, Modality (human–computer interaction), Task (project management), Artificial intelligence, Perception, Field (mathematics), Sensor fusion, Object (grammar), Representation (politics), Computer vision, LiDAR, Key (lock), Human–computer interaction, Pattern recognition (psychology), Systems engineering, Engineering, Geography, Psychology, Computer security, Remote sensing, Mathematics, Neuroscience, Politics, Law, Political science, Pure mathematics
Authors
Yingjuan Tang, Hongwen He, Yong Wang, Zan Mao, Haoyu Wang
Source
Journal: Neurocomputing
[Elsevier]
Date: 2023-07-22
Volume/Issue: 553: 126587-126587
Citations: 15
Identifier
DOI: 10.1016/j.neucom.2023.126587
Abstract
Autonomous driving perception has made significant strides in recent years, but accurately sensing the environment with a single sensor remains a daunting task. This review offers a comprehensive overview of current research on LiDAR-camera fusion for multi-modality 3D object detection. It first identifies the perception task, the openly available detection datasets, and the data representations relevant to 3D object detection. It then presents an in-depth survey of coarse-grained and fine-grained fusion approaches, reporting their respective performances on the KITTI and nuScenes datasets. From these observations, the review identifies general trends in multi-modality 3D object detection and offers insights and promising research directions. It also summarizes the current challenges facing fusion strategies for perception in autonomous driving. Based on a critical review of the existing literature, the paper identifies and discusses key research directions for fusion-based 3D object detection approaches, which should be instructive for future work.
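As a concrete illustration of what LiDAR-camera fusion involves at the geometric level, the sketch below projects LiDAR points into a camera image using calibration matrices, the alignment step that most coarse-grained (and many fine-grained) fusion schemes build on. This is not code from the reviewed paper: the function name, calibration matrices, and toy point cloud are illustrative placeholders, with real values normally taken from a dataset such as KITTI or nuScenes.

# Minimal sketch (assumptions noted): align LiDAR points with a camera image
# by transforming them into the camera frame and applying the intrinsics.
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    points_lidar     : (N, 3) x, y, z coordinates in the LiDAR frame.
    T_cam_from_lidar : (4, 4) rigid transform from the LiDAR to the camera frame.
    K                : (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for points in front of the camera and
    the (N,) boolean mask selecting those points.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates: (N, 4)
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    # Transform into the camera frame and keep points with positive depth.
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]
    # Perspective projection through the intrinsics, then divide by depth.
    pix_h = (K @ pts_cam.T).T
    pix = pix_h[:, :2] / pix_h[:, 2:3]
    return pix, in_front

if __name__ == "__main__":
    # Toy example: placeholder identity extrinsic and pinhole intrinsics.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-10, 10, size=(100, 3)) + np.array([0.0, 0.0, 15.0])
    T = np.eye(4)                       # placeholder extrinsic calibration
    K = np.array([[700.0, 0.0, 640.0],  # placeholder focal lengths / principal point
                  [0.0, 700.0, 360.0],
                  [0.0, 0.0, 1.0]])
    pix, mask = project_lidar_to_image(pts, T, K)
    print(pix.shape, mask.sum())

Once LiDAR points are expressed in pixel coordinates, coarse-grained methods can paint points with image features or filter them by 2D detections, while fine-grained methods fuse per-point and per-pixel features inside the network; the projection itself is common to both families.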