Computer Vision
Artificial Intelligence
LiDAR
Computer Science
Inertial Measurement Unit
Robustness
Point Cloud
Simultaneous Localization and Mapping
Robot
Global Navigation Satellite System Applications
Ground Plane
Ground Truth
Encoder
Sensor Fusion
Odometry
Mobile Robot
Remote Sensing
Geography
Global Positioning System
Operating System
Gene
Telecommunications
Biochemistry
Chemistry
Antenna (radio)
Authors
Ting Wang, Yun Su, Shiliang Shao, Chen Yao, Zhidong Wang
Identifier
DOI: 10.1109/iros51168.2021.9636232
Abstract
This paper presents a tightly coupled pipeline that efficiently fuses measurements from LiDAR, camera, IMU, encoder, and GNSS to estimate the robot state and build a map even in challenging situations. The depth of visual features is extracted by projecting the LiDAR point cloud and the ground plane into the image. We select tracked high-quality visual features and LiDAR features and tightly couple the pre-integrated values of the IMU and the encoder to optimize the state increment of the robot. We use the estimated relative pose to re-evaluate the matching distance between features in the local window and to remove dynamic objects and outliers. In the mapping node, we use the refined features and tightly couple the GNSS measurements, increment factors, and local ground constraints to further refine the robot's global state by aligning LiDAR features with the global map. Furthermore, the method can detect sensor degradation and automatically reconfigure the optimization process. Using a six-wheeled ground robot, we perform extensive experiments in both indoor and outdoor environments and demonstrate that the proposed GR-Fusion outperforms state-of-the-art SLAM methods in accuracy and robustness.
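The depth-association step described in the abstract (projecting the LiDAR point cloud into the image so that tracked visual features can be given depth) can be illustrated with a minimal sketch. This is not the authors' GR-Fusion implementation: the pinhole camera model, the names K and T_cam_lidar, and the 3-pixel matching radius are illustrative assumptions.

import numpy as np

def project_lidar_to_image(points_lidar, K, T_cam_lidar, image_shape):
    """Project LiDAR points into the image plane of a pinhole camera.

    points_lidar: (N, 3) points in the LiDAR frame.
    K:            (3, 3) camera intrinsic matrix (assumed known).
    T_cam_lidar:  (4, 4) LiDAR-to-camera extrinsic transform (assumed known).
    image_shape:  (height, width) of the image.
    Returns (M, 2) pixel coordinates and (M,) depths of the points in view.
    """
    # Move points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Discard points behind or too close to the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection: apply intrinsics, then divide by depth.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Keep only projections that land inside the image bounds.
    h, w = image_shape
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[in_img], pts_cam[in_img, 2]

def feature_depth(feature_uv, lidar_uv, lidar_depth, max_px=3.0):
    """Assign a visual feature the depth of the nearest projected LiDAR
    point, or None if none projects within max_px pixels."""
    if len(lidar_uv) == 0:
        return None
    d2 = np.sum((lidar_uv - np.asarray(feature_uv)) ** 2, axis=1)
    i = np.argmin(d2)
    return float(lidar_depth[i]) if d2[i] <= max_px ** 2 else None

Per the abstract, features with no nearby LiDAR return can instead take their depth from the projected ground plane; that fallback is omitted from this sketch.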