Frustum
Computer vision
Artificial intelligence
Computer science
LiDAR
RGB color model
Stage (stratigraphy)
Computer graphics (images)
Object (grammar)
Object detection
Remote sensing
Pattern recognition (psychology)
Geography
Engineering
Geology
Paleontology
Mechanical engineering
Authors
Anshul Paigwar, David Sierra-Gonzalez, Özgür Erkent, Christian Laugier
Identifier
DOI: 10.1109/iccvw54120.2021.00327
Abstract
Accurate 3D object detection is a key part of the perception module for autonomous vehicles. A better understanding of objects in 3D facilitates better decision-making and path planning. RGB cameras and LiDAR are the most commonly used sensors in autonomous vehicles for environment perception. Many approaches have shown promising results for 2D detection with RGB images, but efficiently localizing small objects like pedestrians in the 3D point cloud of large scenes has remained a challenging area of research. We propose a novel method, Frustum-PointPillars, for 3D object detection using LiDAR data. Instead of solely relying on point cloud features, we leverage the mature field of 2D object detection to reduce the search space in 3D. Then, we use the Pillar Feature Encoding network for object localization in the reduced point cloud. We also propose a novel approach for masking point clouds to further improve the localization of objects. We train our network on the KITTI dataset and perform experiments to show its effectiveness. On the KITTI test set, our method outperforms other multi-sensor SOTA approaches for 3D pedestrian localization (Bird's Eye View) while achieving a significantly faster runtime of 14 Hz.
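The core idea of reducing the 3D search space with a 2D detection can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `frustum_filter`, the `(xmin, ymin, xmax, ymax)` box format, and the KITTI-style 3×4 projection matrix `P` are all assumptions; points are assumed to already be in the camera coordinate frame.

```python
# Sketch: keep only LiDAR points whose camera projection falls inside
# a 2D detector's bounding box (the "frustum" of that detection).
import numpy as np

def frustum_filter(points, box2d, P):
    """Filter points to the frustum of a 2D detection.

    points : (N, 3) array of points in the camera coordinate frame
    box2d  : (xmin, ymin, xmax, ymax) pixel box from a 2D detector
    P      : (3, 4) KITTI-style camera projection matrix (assumed)
    """
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])       # homogeneous coords
    proj = (P @ pts_h.T).T                             # (N, 3) image-plane coords
    in_front = proj[:, 2] > 0                          # discard points behind camera
    uv = proj[:, :2] / np.maximum(proj[:, 2:3], 1e-6)  # perspective divide -> pixels
    xmin, ymin, xmax, ymax = box2d
    inside = ((uv[:, 0] >= xmin) & (uv[:, 0] <= xmax) &
              (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax))
    return points[in_front & inside]
```

The filtered subset is then small enough that a point-cloud network (here, the Pillar Feature Encoding of PointPillars) only has to localize the object within one frustum rather than the full scene.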