Keywords: Simultaneous localization and mapping (SLAM); Computer vision; Artificial intelligence; Lidar; Computer science; Odometry; Visual odometry; Sensor fusion; Fusion; Monocular; Robotics; Mobile robot; Remote sensing
DOI: 10.1145/3573428.3573575
Abstract
SLAM (simultaneous localization and mapping) is based on the assumption of a static environment, and the external sensors used in SLAM systems each have their own advantages and disadvantages. Consequently, poor lighting conditions, a lack of geometric features, and dynamic objects in the scene degrade the positioning accuracy of SLAM algorithms that rely on a single sensor. To address this problem, this paper proposes a dynamic SLAM algorithm based on Lidar-vision fusion. The algorithm provides depth information for image features by fusing monocular image sequences with Lidar scans. After coarse positioning with the fused data, dynamic objects in the scene are eliminated to further improve positioning accuracy. Comparative experiments on the 11 sequences of the KITTI odometry dataset demonstrate that the localization accuracy of the proposed algorithm exceeds that of the vision-based ORB-SLAM2 and DynaSLAM algorithms.
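The abstract does not give implementation details, so the following is a minimal sketch of the two steps it names, under stated assumptions: KITTI-style calibration matrices (`Tr_velo_to_cam`, `P2`), nearest-projected-point depth lookup for feature depth association, and a reprojection-error heuristic for dynamic-point rejection after the coarse pose estimate. All function names are hypothetical, and the association and rejection strategies shown are common choices, not necessarily the ones used in the paper.

```python
# Sketch: Lidar-camera depth association and dynamic-point rejection.
# Assumes KITTI calibration conventions; not the paper's actual code.
import numpy as np

def project_lidar_to_image(points_velo, Tr_velo_to_cam, P2):
    """Project Nx3 Lidar points into the image plane.

    Tr_velo_to_cam: 3x4 Lidar-to-camera extrinsic (KITTI calibration file).
    P2:             3x4 projection matrix of the left color camera.
    Returns pixel coordinates (u, v) and camera-frame depth z per point.
    """
    n = points_velo.shape[0]
    pts_h = np.hstack([points_velo, np.ones((n, 1))])        # N x 4 homogeneous
    pts_cam = pts_h @ Tr_velo_to_cam.T                       # N x 3, camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]                   # drop points behind camera
    pts_img = np.hstack([pts_cam, np.ones((len(pts_cam), 1))]) @ P2.T
    u = pts_img[:, 0] / pts_img[:, 2]
    v = pts_img[:, 1] / pts_img[:, 2]
    return u, v, pts_cam[:, 2]

def depth_for_features(keypoints, u, v, z, radius=3.0):
    """Assign each 2D feature the depth of the nearest projected Lidar point
    within `radius` pixels; NaN where no projected point is close enough."""
    depths = np.full(len(keypoints), np.nan)
    if len(u) == 0:
        return depths
    for i, (ku, kv) in enumerate(keypoints):
        d2 = (u - ku) ** 2 + (v - kv) ** 2
        j = np.argmin(d2)
        if d2[j] <= radius ** 2:
            depths[i] = z[j]
    return depths

def flag_dynamic_features(pts3d_prev, keypoints_cur, R, t, K, thresh_px=5.0):
    """After a coarse pose estimate (R, t) from previous to current camera
    frame, flag features whose reprojection error exceeds `thresh_px` as
    candidate dynamic points (one common heuristic for rejection)."""
    proj = (K @ (R @ pts3d_prev.T + t.reshape(3, 1))).T      # N x 3
    uv = proj[:, :2] / proj[:, 2:3]                          # N x 2 pixels
    err = np.linalg.norm(uv - keypoints_cur, axis=1)
    return err > thresh_px
```

In this reading, the Lidar depths resolve the scale ambiguity of monocular features before pose estimation, and the reprojection-error test then removes points inconsistent with the coarse ego-motion, which is the role the abstract assigns to dynamic-object elimination.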