Lidar
Simultaneous localization and mapping (SLAM)
Computer vision
Artificial intelligence
Visual odometry
Computer science
Odometry
Remote sensing
Geography
Robotics
Mobile robot
Authors
Chih-Chung Chou, Cheng-Fu Chou
Identifier
DOI:10.1109/tits.2021.3130089
Abstract
We investigate a novel way to integrate visual SLAM and lidar SLAM. Instead of enhancing visual odometry with lidar depths or using visual odometry as the initial motion guess for lidar odometry, we propose tightly-coupled visual-lidar SLAM (TVL-SLAM), in which the visual and lidar frontends run independently and all visual and lidar measurements are incorporated into the backend optimization. To achieve large-scale bundle adjustment in TVL-SLAM, we focus on accurate and efficient lidar residual compression. The visual-lidar SLAM system implemented in this work is based on the open-source ORB-SLAM2 and a lidar SLAM method with average performance, yet the resulting visual-lidar SLAM clearly outperforms existing visual/lidar SLAM approaches, achieving 0.52% error on KITTI training sequences and 0.56% error on testing sequences.
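The abstract's key idea, jointly optimizing visual and lidar measurements in one backend while compressing the many lidar residuals, can be illustrated with a toy linear least-squares problem. This is a hypothetical sketch, not the paper's implementation: it estimates a 2-D translation from a few "visual" correspondences plus many "lidar" point residuals, where the lidar block is compressed to a single row (mean residual scaled by sqrt(N)), which preserves the same normal equations as stacking all N residuals when their Jacobians are identical.

```python
import numpy as np

rng = np.random.default_rng(0)
t_true = np.array([0.3, -0.1])  # ground-truth translation to recover

# A few "visual" landmark correspondences (low noise, few measurements)
vis_landmarks = rng.normal(size=(5, 2))
vis_obs = vis_landmarks + t_true + 0.01 * rng.normal(size=(5, 2))

# Many "lidar" point correspondences (higher noise, many measurements)
lid_pts = rng.normal(size=(1000, 2))
lid_obs = lid_pts + t_true + 0.05 * rng.normal(size=(1000, 2))

def solve_joint():
    # Residual model: r_i(t) = obs_i - (point_i + t), Jacobian = -I for all i;
    # since the problem is linear in t, one least-squares solve suffices.
    r_vis = vis_obs - vis_landmarks          # each row is approximately t
    n = lid_obs.shape[0]
    # Compression: N residuals with identical Jacobians collapse to one
    # weighted row; sqrt(N) scaling keeps A^T A and A^T b unchanged.
    r_lid = np.sqrt(n) * (lid_obs - lid_pts).mean(axis=0)
    A = np.vstack([np.tile(np.eye(2), (r_vis.shape[0], 1)),
                   np.sqrt(n) * np.eye(2)])
    b = np.concatenate([r_vis.reshape(-1), r_lid])
    t_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t_hat

t_hat = solve_joint()
print(t_hat)  # close to t_true
```

The compressed lidar row contributes exactly the same information to the normal equations as the 1000 stacked rows would, which is the efficiency argument behind residual compression in large-scale bundle adjustment; the paper's actual compression operates on real lidar cost terms rather than this toy model.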