Keywords
Artificial intelligence, visual odometry, computer vision, odometry, lidar, computer science, monocular, pose, robustness, feature, remote sensing, mobile robot, robot
Authors
Baofu Fang, Qing Pan, Hao Wang
Identifier
DOI: 10.1109/wrcsara60131.2023.10261804
Abstract
Lidar-assisted visual odometry is a widely used approach to pose estimation. However, existing lidar-visual odometry methods suffer from depth-association errors, and methods based solely on point features lack accuracy and are prone to tracking failures, leading to inaccurate pose estimates. In this paper, we propose a direct monocular visual odometry method based on lidar-visual fusion. First, high-gradient pixels from the lidar point-cloud projection are extracted, and an initial pose is estimated by minimizing photometric error, avoiding inaccurate feature-depth association. Then, point and line features in keyframes are combined and associated with the current frame to refine the pose; line features are likewise matched by minimizing photometric error. Evaluations on the KITTI Odometry and nuScenes datasets show that, compared with lidar odometry and similar lidar-assisted visual odometry methods, our method achieves better pose-estimation accuracy and robustness in the majority of scenarios.
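To make the direct-alignment step concrete, here is a minimal sketch (not the authors' implementation) of the photometric residual used in such direct methods: lidar points with known depth are projected into a reference frame and the current frame with a pinhole model, and the intensity difference under a candidate pose (R, t) is the residual that the initial pose estimation would minimize. The intrinsics (KITTI-like values), image sizes, and function names below are illustrative assumptions.

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy); KITTI-like values, for illustration only.
FX, FY, CX, CY = 718.856, 718.856, 607.19, 185.22

def project(points_cam):
    """Project 3-D points in the camera frame to pixel coordinates (u, v)."""
    u = FX * points_cam[:, 0] / points_cam[:, 2] + CX
    v = FY * points_cam[:, 1] / points_cam[:, 2] + CY
    return np.stack([u, v], axis=1)

def photometric_residuals(img_ref, img_cur, pts_ref, R, t):
    """Intensity differences for lidar points with known depth (pts_ref,
    in the reference camera frame) under candidate pose (R, t).
    A direct method would minimize the sum of squares of these residuals."""
    pts_cur = pts_ref @ R.T + t          # transform into the current camera frame
    uv_ref = project(pts_ref)
    uv_cur = project(pts_cur)
    # Nearest-neighbour intensity lookup; real systems use bilinear interpolation.
    i_ref = img_ref[uv_ref[:, 1].round().astype(int), uv_ref[:, 0].round().astype(int)]
    i_cur = img_cur[uv_cur[:, 1].round().astype(int), uv_cur[:, 0].round().astype(int)]
    return i_cur.astype(float) - i_ref.astype(float)
```

With the identity pose and identical images the residuals vanish; a solver (e.g. Gauss-Newton over the 6-DoF pose) would perturb (R, t) to drive these residuals toward zero on the extracted high-gradient pixels.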