Computer science
Artificial intelligence
Point cloud
Computer vision
Segmentation
LiDAR
Object detection
Pooling
Sensor fusion
Task (project management)
Representation (politics)
Benchmark (surveying)
Remote sensing
Geology
Geography
Law
Management
Economics
Geodesy
Politics
Political science
Authors
Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, Song Han
Identifier
DOI: 10.1109/icra48891.2023.10160968
Abstract
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we propose BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift the key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than 40×. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on the nuScenes benchmark, achieving 1.3% higher mAP and NDS on 3D object detection and 13.6% higher mIoU on BEV map segmentation, with 1.9× lower computation cost. Code to reproduce our results is available at https://github.com/mit-han-lab/bevfusion.
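As a rough illustration of the two ideas named in the abstract, the PyTorch sketch below pools camera frustum-point features into a flattened BEV grid with a naive scatter-mean (a simple stand-in for the paper's optimized BEV pooling kernel) and then fuses the resulting camera BEV map with a LiDAR BEV map through a small convolutional block that any task head could consume. The names (bev_pool, BEVFuser), channel sizes, and grid resolution are illustrative assumptions, not the released implementation; see the linked repository for the actual code.

import torch
import torch.nn as nn

def bev_pool(frustum_feats, bev_indices, num_cells):
    # frustum_feats: (N, C) features of camera frustum points
    # bev_indices:   (N,)   precomputed flattened BEV cell index per point
    # Returns (num_cells, C): mean of the features falling into each BEV cell.
    n, c = frustum_feats.shape
    summed = torch.zeros(num_cells, c, dtype=frustum_feats.dtype)
    counts = torch.zeros(num_cells, 1, dtype=frustum_feats.dtype)
    summed.index_add_(0, bev_indices, frustum_feats)
    counts.index_add_(0, bev_indices, torch.ones(n, 1, dtype=frustum_feats.dtype))
    return summed / counts.clamp(min=1)

class BEVFuser(nn.Module):
    # Concatenate per-modality BEV maps and mix them with a conv-BN-ReLU block.
    def __init__(self, cam_c=80, lidar_c=256, out_c=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_c + lidar_c, out_c, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_c),
            nn.ReLU(inplace=True),
        )
    def forward(self, cam_bev, lidar_bev):
        return self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))

# Toy usage on a 128 x 128 BEV grid (all sizes are illustrative).
H = W = 128
cam_c = 80
feats = torch.randn(10000, cam_c)            # dummy camera frustum features
idx = torch.randint(0, H * W, (10000,))      # dummy BEV cell assignments
cam_bev = bev_pool(feats, idx, H * W).reshape(H, W, cam_c).permute(2, 0, 1).unsqueeze(0)
lidar_bev = torch.randn(1, 256, H, W)        # dummy voxelized LiDAR features in BEV
fused = BEVFuser()(cam_bev, lidar_bev)       # (1, 256, 128, 128) shared BEV feature map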