Keywords
Lidar
Computer Science
Artificial Intelligence
Computer Vision
Point Cloud
Robustness
Object Detection
Pixel
Modality
Pedestrian Detection
Remote Sensing
Pattern Recognition
Geography
Pedestrian
Authors
Yingwei Li, Adams Wei Yu, Tianjian Meng, Ben Caine, Jiquan Ngiam, Daiyi Peng, Junyang Shen, Bo Wu, Yifeng Lu, Denny Zhou, Quoc V. Le, Alan Yuille, Mingxing Tan
Identifier
DOI: 10.1109/cvpr52688.2022.01667
Abstract
Lidars and cameras are critical sensors that provide complementary information for 3D detection in autonomous driving. While prevalent multi-modal methods [34], [36] simply decorate raw lidar point clouds with camera features and feed them directly to existing 3D detection models, our study shows that fusing camera features with deep lidar features, rather than with raw points, can lead to better performance. However, as those features are often augmented and aggregated, a key challenge in fusion is how to effectively align the transformed features from the two modalities. In this paper, we propose two novel techniques: InverseAug, which inverts geometry-related augmentations (e.g., rotation) to enable accurate geometric alignment between lidar points and image pixels, and LearnableAlign, which leverages cross-attention to dynamically capture the correlations between image and lidar features during fusion. Based on InverseAug and LearnableAlign, we develop a family of generic multi-modal 3D detection models named DeepFusion, which is more accurate than previous methods. For example, DeepFusion improves the PointPillars, CenterPoint, and 3D-MAN baselines on pedestrian detection by 6.7, 8.9, and 6.2 LEVEL_2 APH, respectively. Notably, our models achieve state-of-the-art performance on the Waymo Open Dataset and show strong robustness against input corruptions and out-of-distribution data. Code will be publicly available at https://github.com/tensorflow/lingvo.
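To make the two ideas concrete, below is a minimal NumPy sketch, not the paper's lingvo implementation: `inverse_rotate_z` undoes a yaw-rotation augmentation so augmented lidar points can be mapped back to the original sensor frame before projecting to image pixels (the InverseAug idea, for rotation only), and `cross_attention_fuse` is a single-head cross-attention in which each deep lidar feature attends over candidate camera features (the LearnableAlign idea). All function names, tensor shapes, and the concatenation-based fusion at the end are illustrative assumptions, not the authors' API.

```python
import numpy as np


def inverse_rotate_z(points_aug, yaw):
    """Map lidar points from the augmented frame back to the original sensor
    frame by applying the inverse of a z-axis (yaw) rotation augmentation.

    points_aug: (N, 3) points after a yaw-rotation augmentation.
    yaw: rotation angle in radians that was applied during augmentation.
    """
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot_inv = np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])
    return points_aug @ rot_inv.T


def cross_attention_fuse(lidar_feat, cam_feat):
    """Single-head cross-attention sketch: each lidar feature (query) attends
    over K candidate camera features (keys/values), e.g. gathered around its
    projected pixel, and the attended result is fused with the lidar feature.

    lidar_feat: (N, D) deep lidar features.
    cam_feat:   (N, K, D) candidate camera features per lidar feature.
    """
    d = lidar_feat.shape[-1]
    # Scaled dot-product attention logits: (N, K)
    logits = np.einsum('nd,nkd->nk', lidar_feat, cam_feat) / np.sqrt(d)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Attention-weighted sum of camera features: (N, D)
    attended = np.einsum('nk,nkd->nd', weights, cam_feat)
    # Fuse by concatenation (one simple choice; the paper learns fusion end to end).
    return np.concatenate([lidar_feat, attended], axis=-1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts_aug = rng.normal(size=(5, 3))
    pts_orig = inverse_rotate_z(pts_aug, yaw=np.pi / 6)  # undo a 30-degree yaw
    lidar = rng.normal(size=(5, 16))
    camera = rng.normal(size=(5, 9, 16))
    fused = cross_attention_fuse(lidar, camera)
    print(pts_orig.shape, fused.shape)  # (5, 3) (5, 32)
```

In the paper, these alignment and attention steps operate on deep lidar features inside existing detectors such as PointPillars, CenterPoint, and 3D-MAN; the sketch only isolates the per-point alignment and fusion step.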