Ting Wang, Yun Su, Shiliang Shao, Chen Yao, Zhidong Wang
Identifiers
DOI: 10.1109/IROS51168.2021.9636232
Abstract
This paper presents a tightly coupled pipeline that efficiently fuses measurements from LiDAR, camera, IMU, encoder, and GNSS to estimate the robot state and build a map even in challenging situations. The depth of visual features is recovered by projecting the LiDAR point cloud and the ground plane into the image. We select high-quality tracked visual features and LiDAR features and tightly couple them with the pre-integrated IMU and encoder measurements to optimize the state increment of the robot. The estimated relative pose is used to re-evaluate the matching distance between features in the local window and to remove dynamic objects and outliers. In the mapping node, we use the refined features and tightly couple GNSS measurements, increment factors, and local ground constraints to further refine the robot's global state by aligning LiDAR features with the global map. Furthermore, the method can detect sensor degradation and automatically reconfigure the optimization process. Using a six-wheeled ground robot, we perform extensive experiments in both indoor and outdoor environments and demonstrate that the proposed GR-Fusion outperforms state-of-the-art SLAM methods in terms of accuracy and robustness.
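As a rough illustration of the depth-association step mentioned in the abstract (projecting the LiDAR point cloud into the image to obtain depth for visual features), the sketch below is not the authors' implementation; it assumes a known camera intrinsic matrix K and a known LiDAR-to-camera extrinsic T_cam_lidar, and simply assigns each tracked feature the depth of the nearest projected LiDAR point.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, image_size):
    """Project 3-D LiDAR points into the camera image plane.

    points_lidar: (N, 3) points in the LiDAR frame.
    T_cam_lidar:  (4, 4) homogeneous LiDAR-to-camera extrinsic (assumed known).
    K:            (3, 3) camera intrinsic matrix (assumed known).
    Returns pixel coordinates (M, 2) and depths (M,) of points that project
    inside the image and lie in front of the camera.
    """
    # Transform points into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection.
    uv_h = (K @ pts_cam.T).T
    uv = uv_h[:, :2] / uv_h[:, 2:3]

    # Keep projections that land inside the image bounds.
    w, h = image_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside], pts_cam[inside, 2]


def associate_feature_depth(features_uv, lidar_uv, lidar_depth, max_pixel_dist=3.0):
    """Assign each visual feature the depth of the nearest projected LiDAR point.

    Features with no projected point within max_pixel_dist keep depth = -1
    (unknown), so they could fall back to triangulation or a ground-plane
    projection instead (a hypothetical fallback, not specified here).
    """
    depths = np.full(len(features_uv), -1.0)
    for i, f in enumerate(features_uv):
        d2 = np.sum((lidar_uv - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_pixel_dist ** 2:
            depths[i] = lidar_depth[j]
    return depths
```

In practice, a system of this kind would typically fit a small local plane or interpolate among several neighboring projected points rather than taking a single nearest neighbor; the nearest-neighbor rule above is only meant to convey the idea of lending LiDAR depth to camera features.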