Jun Cheng, Liyan Zhang, Qihong Chen, Zhumu Fu, Luyao Du
Source
Journal: IEEE Transactions on Vehicular Technology [Institute of Electrical and Electronics Engineers]  Date: 2024-03-20  Volume/Issue: 73 (8): 11029-11043  Citations: 2
Identifier
DOI: 10.1109/tvt.2024.3379435
Abstract
Simultaneous localization and mapping (SLAM) has become indispensable for autonomous driving vehicles. Since visual images are vulnerable to light interference and light detection and ranging (LiDAR) depends heavily on the geometric features of the surrounding scene, relying on a camera or LiDAR alone shows limitations in challenging environments. This paper proposes a Visual-LiDAR-IMU fusion method for high-precision and robust vehicle localization. In the front end, the LiDAR point cloud is used to obtain depth information for the visual features, and the synchronized IMU measurements are input into the pose estimation module in a loosely coupled manner. In the back end, two critical strategies are proposed to reduce the computational load of the algorithm: a balanced selection strategy based on keyframe and sliding-window algorithms, and a classification optimization strategy based on feature points and pose estimation assistance. In addition, an improved loop detection algorithm based on Iterative Closest Point (ICP) is proposed to reduce large-scale drift. Experimental results on real-world scenes show that the average positioning error of our algorithm is 1.10 m, 0.91 m, and 1.04 m in the x-, y-, and z-directions, the average rotation error is 1.03 deg, 0.81 deg, and 0.70 deg for roll, pitch, and yaw, the average resource utilization is 32.04% (CPU) and 13.18% (memory), and the average processing time is 24.87 ms. Compared with the ORB-SLAM3, LVIO, LVI-SAM, R3LIVE, and Fast-LIVO algorithms, the proposed algorithm performs better in both accuracy and robustness, with the best real-time performance.
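The abstract's loop detection relies on ICP, which estimates the rigid transform aligning two point clouds by alternating nearest-neighbor matching with a closed-form pose solve. The sketch below is a minimal point-to-point ICP in Python (NumPy/SciPy), not the paper's improved variant; the function names `best_fit_transform` and `icp` are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Closed-form rigid transform (Kabsch/SVD) mapping points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iters=30, tol=1e-9):
    """Point-to-point ICP: returns (R, t, mean residual) aligning source to target."""
    src = source.copy()
    tree = cKDTree(target)                    # nearest-neighbor correspondences
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    err = prev_err
    for _ in range(iters):
        dist, idx = tree.query(src)
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                   # apply incremental transform
        R_total = R @ R_total                 # accumulate total pose
        t_total = R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:         # converged
            break
        prev_err = err
    return R_total, t_total, err
```

In a loop-closure setting, `source` and `target` would be the current and revisited keyframe scans; the residual after alignment can serve as a loop-verification score before the pose-graph correction.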