Authors
Yu Xia, Hongwei Wu, Liucun Zhu, Weiwei Qi, Shushu Zhang, Junwu Zhu
Abstract
As artificial intelligence and robotics advance, accurate positioning for mobile robots has become increasingly important. This work addresses the limitations of single-sensor SLAM (Simultaneous Localization and Mapping) in complex environments by combining LiDAR (Light Detection and Ranging), camera, IMU (Inertial Measurement Unit), and GNSS (Global Navigation Satellite System) sensors. The proposed multi-sensor tightly-coupled SLAM framework integrates point-line feature-based laser-visual-inertial odometry, visual-laser fusion loop closure detection, and factor graph-based back-end optimization. In the visual-inertial subsystem, an improved LSD (Line Segment Detector) feature extraction strategy fuses point and line features to strengthen the visual line constraints, and the laser point cloud is projected onto the camera coordinate system to associate depth with visual features. To improve robustness in low-texture environments, camera poses are optimized with a sliding-window bundle adjustment. In the laser-inertial subsystem, IMU preintegration corrects laser point cloud motion distortion, and edge and plane features are extracted and matched frame-to-local-map, which improves matching efficiency while reducing computational cost. Together, these subsystems form the laser-visual-inertial odometry fusion system. To overcome the limitations of standalone visual or laser loop closure detection, a dual-loop closure method based on visual-laser fusion is proposed: candidates retrieved with the DBoW2 bag-of-words model are verified by temporal-spatial consistency checks, improving detection efficiency and accuracy. GNSS factors provide global constraints in large-scale outdoor scenarios. Factor graph-based back-end optimization over laser-visual-inertial odometry factors, visual-inertial odometry factors, IMU preintegration factors, loop closure factors, and GNSS factors yields accurate global pose estimates and high-fidelity point cloud maps. Evaluations on the M2DGR dataset and a mobile robot platform show that the proposed method outperforms the state-of-the-art LIO-SAM, reducing the root mean square error of absolute pose estimation by 2.86 m and 3.23 m in two different environments. The approach is particularly effective in outdoor scenarios, improving the precision and robustness of SLAM for mobile robots.
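
To make the depth-association step concrete, the following is a minimal sketch, not the authors' implementation: LiDAR points are transformed into the camera frame with an assumed extrinsic `T_cam_lidar`, projected through assumed pinhole intrinsics `K`, and each 2D visual feature is assigned the depth of its nearest projected point. All names and thresholds are illustrative.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project (N, 3) LiDAR points into pixel coordinates.

    points_lidar: (N, 3) points in the LiDAR frame.
    T_cam_lidar:  (4, 4) extrinsic transform, LiDAR -> camera (assumed known).
    K:            (3, 3) pinhole camera intrinsics.
    Returns (M, 2) pixel coordinates and (M,) depths for points in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]   # points in the camera frame
    in_front = pts_cam[:, 2] > 0.1               # discard points behind the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    return uv, pts_cam[:, 2]

def associate_depth(feature_uv, lidar_uv, lidar_depth, max_px=3.0):
    """Assign each 2D visual feature the depth of its nearest projected LiDAR point."""
    depths = np.full(len(feature_uv), np.nan)
    for i, f in enumerate(feature_uv):
        d2 = np.sum((lidar_uv - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_px ** 2:                 # only trust close associations
            depths[i] = lidar_depth[j]
    return depths
```

A production system would typically query a k-d tree over the projected points, or fit a local plane to several neighbors, rather than taking a single nearest neighbor; the nearest-neighbor version above only shows the idea.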
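
As background on the de-skewing step, the standard on-manifold IMU preintegration terms between keyframes $i$ and $j$ (in the form of Forster et al.; the paper's exact derivation may differ) accumulate gyroscope measurements $\tilde{\boldsymbol{\omega}}_k$ and accelerometer measurements $\tilde{\mathbf{a}}_k$, with biases $\mathbf{b}^g_k$, $\mathbf{b}^a_k$ and sampling interval $\Delta t$:

$$\Delta \mathbf{R}_{ij} = \prod_{k=i}^{j-1} \operatorname{Exp}\!\left( (\tilde{\boldsymbol{\omega}}_k - \mathbf{b}^g_k)\,\Delta t \right)$$

$$\Delta \mathbf{v}_{ij} = \sum_{k=i}^{j-1} \Delta \mathbf{R}_{ik}\,(\tilde{\mathbf{a}}_k - \mathbf{b}^a_k)\,\Delta t$$

$$\Delta \mathbf{p}_{ij} = \sum_{k=i}^{j-1} \left[ \Delta \mathbf{v}_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta \mathbf{R}_{ik}\,(\tilde{\mathbf{a}}_k - \mathbf{b}^a_k)\,\Delta t^2 \right]$$

Because these relative terms do not depend on the global pose and velocity at frame $i$, they can be computed once per IMU segment and reused across optimization iterations; within a scan, the same incremental rotations and translations interpolate each point's pose, which is what removes the motion distortion.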
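
The edge and plane residuals used in frame-to-local-map matching are commonly the LOAM-style point-to-line and point-to-plane distances; whether the paper uses exactly this form is an assumption. For a feature point $\mathbf{p}$ matched to a map line through $\mathbf{p}_a$ and $\mathbf{p}_b$, and to a plane through $\mathbf{p}_a$, $\mathbf{p}_b$, $\mathbf{p}_c$:

$$d_{\mathcal{E}} = \frac{\left| (\mathbf{p} - \mathbf{p}_a) \times (\mathbf{p} - \mathbf{p}_b) \right|}{\left| \mathbf{p}_a - \mathbf{p}_b \right|}, \qquad d_{\mathcal{P}} = \frac{\left| (\mathbf{p} - \mathbf{p}_a) \cdot \left( (\mathbf{p}_a - \mathbf{p}_b) \times (\mathbf{p}_a - \mathbf{p}_c) \right) \right|}{\left| (\mathbf{p}_a - \mathbf{p}_b) \times (\mathbf{p}_a - \mathbf{p}_c) \right|}$$

Minimizing these distances over the scan pose aligns the extracted edge and plane features with the local map.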
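
Finally, a minimal sketch of the temporal-spatial consistency check applied to DBoW2 loop candidates: a candidate is accepted only if it is temporally distant enough to be a genuine revisit rather than an adjacent frame, yet spatially close under the current odometry estimate. The thresholds and the `Keyframe` interface are assumptions for illustration, not values from the paper.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Keyframe:
    idx: int
    stamp: float          # timestamp in seconds
    position: np.ndarray  # (3,) odometry estimate of the keyframe position

def is_consistent_loop(query: Keyframe, candidate: Keyframe,
                       min_dt: float = 30.0, max_dist: float = 10.0) -> bool:
    """Accept a DBoW2 candidate only if temporally distant yet spatially close."""
    dt = abs(query.stamp - candidate.stamp)
    dist = float(np.linalg.norm(query.position - candidate.position))
    return dt >= min_dt and dist <= max_dist
```

Candidates passing this gate would then be verified geometrically (e.g., by scan registration) before a loop closure factor is added to the factor graph.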