Inertial Measurement Unit (IMU)
LiDAR
Simultaneous Localization and Mapping
Computer Science
Remote Sensing
Artificial Intelligence
Computer Vision
Environmental Science
Geography
Mobile Robot
Robotics
Authors
Chuanwei Zhang, R. P. Zhao, Peilin Qin, Jiajia Yang
Identifier
DOI:10.1088/1402-4896/ad7a30
Abstract
Accurate and robust Simultaneous Localization and Mapping (SLAM) is a critical component of driverless cars, and semantic information plays a vital role in scene analysis and understanding. In real scenes, moving objects leave shadow-like (ghosting) artifacts during the mapping process, degrading positioning accuracy and map quality. We therefore propose a semantic SLAM framework that fuses LiDAR, IMU, and camera data, consisting of a semantic-fusion front-end odometry module and a back-end loop closure optimization module based on semantic information. An improved image semantic segmentation algorithm based on Deeplabv3+ is designed: replacing the backbone network and introducing an attention mechanism enhances segmentation performance and ensures the accuracy of point cloud segmentation. Dynamic objects are detected and removed by computing a similarity score over semantic labels. A semantic loop closure detection method extracts key semantic features and applies threshold-range detection and point cloud re-matching to establish correct loop closures; graph optimization then reduces the global cumulative error and improves trajectory accuracy, yielding the global motion trajectory and a 3D semantic map. We evaluate the framework on the KITTI dataset and on a self-collected dataset containing four sequences. The results show that the proposed framework achieves good positioning accuracy and mapping quality in large-scale urban road environments.
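The abstract mentions two semantic steps without implementation detail: removing dynamic objects via a similarity score over semantic labels, and gating loop closure candidates with a threshold check before point cloud re-matching. The minimal Python sketch below illustrates one plausible form of these steps; the dynamic class list, label-histogram representation, cosine similarity, and the thresholds `max_dist_m` and `min_sim` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical set of dynamic semantic classes (Cityscapes-style names);
# the paper's exact class list is not given in the abstract.
DYNAMIC_CLASSES = {"person", "rider", "car", "truck", "bus", "motorcycle", "bicycle"}

def remove_dynamic_points(points, labels, dynamic_classes=DYNAMIC_CLASSES):
    """Drop LiDAR points whose projected semantic label is a dynamic class.

    points : (N, 3) array of LiDAR points in the sensor frame
    labels : length-N sequence of per-point semantic class names
    """
    keep = np.array([lab not in dynamic_classes for lab in labels])
    return points[keep]

def semantic_similarity(hist_a, hist_b):
    """Cosine similarity between per-class label histograms of two scans.

    A simple stand-in for the abstract's 'similarity score of semantic
    labels'; the paper's actual scoring function may differ.
    """
    a, b = np.asarray(hist_a, dtype=float), np.asarray(hist_b, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def is_loop_candidate(hist_query, hist_ref, dist_m, max_dist_m=10.0, min_sim=0.85):
    """Gate loop closure candidates by a distance threshold and semantic similarity.

    Accepted candidates would then be verified by point cloud re-matching
    (e.g. ICP) before being added to the pose graph.
    """
    return dist_m < max_dist_m and semantic_similarity(hist_query, hist_ref) >= min_sim
```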