Keywords
LiDAR
Artificial intelligence
Computer science
Computer vision
Point cloud
Inertial measurement unit
Remote sensing
Geography
Authors
Diantao Tu, Hainan Cui, Shuhan Shen
Source
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Date: 2023-11-16
Volume/pages: 206: 149-167
Cited by: 4
Identifier
DOI: 10.1016/j.isprsjprs.2023.11.012
Abstract
Cameras and LiDARs are currently the two types of sensors most commonly used for 3D mapping. Vision-based methods are susceptible to textureless regions and lighting changes, while LiDAR-based methods easily degenerate in scenes without significant structural features. Most current fusion-based methods require strict synchronization between the camera and LiDAR and need auxiliary sensors, such as an IMU, all of which increase device cost and complexity. To address this, we propose a low-cost mapping pipeline called PanoVLM that requires only a panoramic camera and a LiDAR, without strict synchronization. First, camera poses are estimated by a LiDAR-assisted global Structure-from-Motion, and LiDAR poses are derived from the initial camera-LiDAR relative pose. Then, line-to-line and point-to-plane associations are established between LiDAR point clouds and used to further refine the LiDAR poses and remove motion distortion. With the initial sensor poses, line-to-line correspondences are established between images and LiDAR point clouds to refine their poses jointly. The final step, joint panoramic Multi-View Stereo, estimates a depth map for each panoramic image and fuses them into a complete dense 3D map. Experimental results show that PanoVLM works in various scenarios and outperforms state-of-the-art (SOTA) vision-based and LiDAR-based methods. Compared with the current SOTA LiDAR-based techniques, namely LOAM, LeGO-LOAM, and F-LOAM, PanoVLM reduces the absolute rotation error and absolute translation error by 20% and 35%, respectively. Our code and dataset are available at https://github.com/3dv-casia/PanoVLM.
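The point-to-plane associations mentioned in the abstract reduce, at their core, to evaluating the signed distance of each transformed LiDAR point to its matched plane, which a pose refinement step then drives toward zero. A minimal NumPy sketch of that residual (function and variable names are illustrative, not taken from the PanoVLM code):

```python
import numpy as np

def point_to_plane_residuals(points, plane_points, plane_normals, R, t):
    """Signed point-to-plane distances after applying the rigid pose (R, t).

    points        : (N, 3) LiDAR points in the source frame
    plane_points  : (N, 3) one point on each matched target plane
    plane_normals : (N, 3) unit normals of the matched planes
    """
    transformed = points @ R.T + t               # apply rotation, then translation
    offsets = transformed - plane_points         # offset from a point on each plane
    return np.sum(offsets * plane_normals, axis=1)  # project offsets onto normals

# Toy check: under the identity pose, a point 0.5 m above the z = 0 plane
# has residual 0.5; translating by t = (0, 0, -0.5) cancels it.
pts = np.array([[1.0, 2.0, 0.5]])
q = np.array([[0.0, 0.0, 0.0]])
n = np.array([[0.0, 0.0, 1.0]])
r0 = point_to_plane_residuals(pts, q, n, np.eye(3), np.zeros(3))
r1 = point_to_plane_residuals(pts, q, n, np.eye(3), np.array([0.0, 0.0, -0.5]))
```

In a full pipeline these residuals would be stacked into a nonlinear least-squares problem over the sensor poses (e.g. solved by Gauss-Newton), which is the standard point-to-plane ICP formulation rather than anything specific to this paper.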