Visual odometry
Odometry
Robustness (evolution)
Artificial intelligence
Computer science
Direct method
Monocular
Computer vision
Computation
Pattern recognition (psychology)
Algorithm
Mobile robot
Robot
Biochemistry
Chemistry
Physics
Nuclear magnetic resonance
Gene
Identifier
DOI:10.1109/icccr54399.2022.9790157
Abstract
Visual odometry can be divided into direct and indirect methods. Indirect methods are more robust but computationally more expensive, while direct methods are faster but less robust. To achieve both efficiency and robustness, we propose a novel semi-direct visual odometry approach with a depth prior, which can be applied to most monocular direct odometry approaches. The proposed method is implemented based on Direct Sparse Odometry. Experimental results on TUM datasets show that the proposed approach, which combines the advantages of both the direct and indirect methods, is more efficient and robust than the state-of-the-art method.
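To make the direct/indirect distinction in the abstract concrete, the sketch below contrasts the two residuals each family minimizes: direct methods compare pixel intensities between frames, while indirect methods compare locations of matched feature points. This is a hypothetical illustration, not the paper's implementation; all function names, images, and values are invented for the example.

```python
import numpy as np

def photometric_residual(img_ref, img_cur, u_ref, u_cur):
    """Direct methods minimize intensity differences at corresponding pixels."""
    return float(img_ref[u_ref] - img_cur[u_cur])

def reprojection_residual(x_observed, x_projected):
    """Indirect methods minimize distances between matched feature points."""
    return float(np.linalg.norm(np.asarray(x_observed) - np.asarray(x_projected)))

# Tiny synthetic example: a 3x3 "image", a brightened copy, one tracked pixel.
img_ref = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]], dtype=float)
img_cur = img_ref + 2.0  # uniform brightness change between frames

r_photo = photometric_residual(img_ref, img_cur, (1, 1), (1, 1))
r_repro = reprojection_residual([3.0, 4.0], [0.0, 0.0])
print(r_photo, r_repro)  # -2.0 5.0
```

The photometric residual is sensitive to brightness changes (hence the lower robustness of direct methods noted above) but needs no feature extraction or matching, which is why direct methods are faster; a semi-direct approach combines both kinds of cost.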