Keywords
LiDAR
Computer science
Sensor fusion
Modality
Radar
Object detection
Tracking
Computer vision
Artificial intelligence
Object
Extremely high frequency
Video tracking
Remote sensing
Real-time computing
Data mining
Pattern recognition
Telecommunications
Authors
Yao Li, Jiajun Deng, Yu Zhang, Jianmin Ji, Houqiang Li, Yanyong Zhang
Source
Journal: IEEE Robotics and Automation Letters
Date: 2022-07-25
Volume/Issue: 7 (4): 11182-11189
Citations: 20
Identifier
DOI: 10.1109/lra.2022.3193465
Abstract
A recent trend is to combine multiple sensors (i.e., cameras, LiDARs, and millimeter-wave Radars) to achieve robust multi-modal perception for autonomous systems such as self-driving vehicles. Although quite a few sensor fusion algorithms have been proposed, some of which are top-ranked on various leaderboards, a systematic study of how to integrate these three types of sensors to develop effective multi-modal 3D object detection and tracking is still missing. Towards this end, we first study the strengths and weaknesses of each data modality carefully, and then compare several different fusion strategies to maximize their utility. Finally, based upon the lessons learnt, we propose a simple yet effective multi-modal 3D object detection and tracking framework (namely EZFusion). As demonstrated by extensive experiments on the nuScenes dataset, without fancy network modules, our proposed EZFusion makes remarkable improvements over the LiDAR-only baseline, and achieves performance comparable to state-of-the-art fusion-based methods.
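To make the fusion idea in the abstract concrete, below is a minimal, hypothetical sketch of point-level camera-LiDAR feature fusion: each LiDAR point is projected into the image plane and "decorated" with the image feature at the nearest pixel. All function names, parameters, and the nearest-pixel sampling choice here are illustrative assumptions; the abstract does not describe EZFusion's actual implementation, so this is not the paper's method.

    # Hypothetical sketch of point-level camera-LiDAR feature fusion
    # (an illustration of the general "decorate each LiDAR point with
    # image features" idea; NOT the actual EZFusion implementation).
    import numpy as np

    def project_points(points_xyz, K, T_cam_from_lidar):
        """Project LiDAR points (N, 3) into image pixel coordinates.

        K: (3, 3) camera intrinsic matrix.
        T_cam_from_lidar: (4, 4) extrinsic transform, LiDAR frame -> camera frame.
        Returns (N, 2) pixel coords and an (N,) mask of points in front of the camera.
        """
        n = points_xyz.shape[0]
        pts_h = np.hstack([points_xyz, np.ones((n, 1))])      # homogeneous coords
        pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]       # camera frame
        in_front = pts_cam[:, 2] > 1e-3                       # positive depth only
        uv = (K @ pts_cam.T).T
        uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-3, None)      # perspective divide
        return uv, in_front

    def decorate_points(points_xyz, image_feat, K, T_cam_from_lidar):
        """Append nearest-pixel image features to each LiDAR point.

        image_feat: (H, W, C) feature map (e.g., from a 2D backbone).
        Returns (N, 3 + C) fused point features; points projecting outside
        the image keep zero-valued image features.
        """
        h, w, c = image_feat.shape
        uv, in_front = project_points(points_xyz, K, T_cam_from_lidar)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        feats = np.zeros((points_xyz.shape[0], c), dtype=image_feat.dtype)
        feats[valid] = image_feat[v[valid], u[valid]]
        return np.hstack([points_xyz, feats])

    # Toy usage with random data and an identity extrinsic.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pts = rng.uniform(-10, 10, size=(1000, 3)) + np.array([0, 0, 15.0])
        K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
        fused = decorate_points(
            pts, rng.normal(size=(480, 640, 8)).astype(np.float32), K, np.eye(4))
        print(fused.shape)  # (1000, 11): xyz + 8 image-feature channels

Nearest-pixel sampling keeps the sketch short; a real fusion pipeline would typically sample learned 2D features (bilinearly), handle occlusion, and account for time synchronization between sensors, and could decorate points with radar features in the same fashion.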