Visual odometry
Artificial intelligence
Odometry
Computer vision
Monocular
Computer science
Lidar
Bundle adjustment
Pose
Benchmark (surveying)
Line (geometry)
Point (geometry)
Monocular vision
Robot
Remote sensing
Mathematics
Mobile robot
Geography
Image (mathematics)
Geometry
Geodesy
Authors
Shi-Sheng Huang, Zeyu Ma, Tai-Jiang Mu, Hongbo Fu, Shi-Min Hu
Identifier
DOI:10.1109/icra40945.2020.9196613
Abstract
We introduce a novel lidar-monocular visual odometry approach using point and line features. Compared with previous point-only lidar-visual odometry, our approach leverages more environment structure information by introducing both point and line features into pose estimation. We provide a robust method for point and line depth extraction, and formulate the extracted depth as prior factors for point-line bundle adjustment. This method greatly reduces the features' 3D ambiguity and thus improves the pose estimation accuracy. In addition, we provide a purely visual motion tracking method and a novel scale correction scheme, leading to an efficient lidar-monocular visual odometry system with high accuracy. The evaluations on the public KITTI odometry benchmark show that our technique achieves more accurate pose estimation than the state-of-the-art approaches, and is sometimes even better than those leveraging semantic information.
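As a rough illustration of the formulation described in the abstract, the lidar depth priors can enter a point-line bundle adjustment as extra residual terms alongside the usual reprojection errors. The notation below (poses T_i, points X_j, lines L_k, weights Σ, robust kernel ρ) is ours for exposition, not necessarily the paper's exact formulation:

```latex
% Sketch (assumed notation, not the authors' exact cost function):
% point-line bundle adjustment with lidar depth priors.
% T_i: camera poses, X_j: 3D points, L_k: 3D lines,
% x_{ij}, l_{ik}: observed image points / line segments,
% d_j, d_k: depths extracted from lidar for each feature,
% \pi: point projection, e_\ell: point-to-line reprojection error,
% z(\cdot): feature depth in its reference frame, \rho: robust kernel.
\min_{\{T_i\},\{X_j\},\{L_k\}}
    \sum_{i,j} \rho\bigl(\lVert \pi(T_i X_j) - x_{ij} \rVert^2_{\Sigma_p}\bigr)
  + \sum_{i,k} \rho\bigl(e_\ell(T_i L_k,\, l_{ik})^2_{\Sigma_\ell}\bigr)
  + \sum_{j}   \lVert z(X_j) - d_j \rVert^2_{\Sigma_d}
  + \sum_{k}   \lVert z(L_k) - d_k \rVert^2_{\Sigma_d}
```

The last two sums play the role of the "prior factors" mentioned in the abstract: they anchor the optimized feature depths to the lidar-derived measurements, which is what reduces the 3D ambiguity of visually triangulated points and lines.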