Simultaneous localization and mapping
Computer science
Lidar
Computer vision
Artificial intelligence
Inertial measurement unit
Context (archaeology)
Ranging
Visualization
Key (lock)
Robot
Remote sensing
Mobile robot
Geography
Computer security
Telecommunications
Archaeology
Authors
Jun Cheng, Liyan Zhang, Qihong Chen, Xinrong Hu, Jingcao Cai
Identifiers
DOI:10.1016/j.engappai.2022.104992
Abstract
Autonomous driving vehicles require precise localization and mapping solutions in different driving environments. In this context, Simultaneous Localization and Mapping (SLAM) technology is a well-studied solution. Light Detection and Ranging (LIDAR) and camera sensors are commonly used for localization and perception. However, after ten to twenty years of evolution, the LIDAR-SLAM method does not seem to have changed much. Compared with LIDAR-based schemes, visual SLAM has strong scene-recognition ability along with the advantages of low cost and easy installation. Indeed, in the field of autonomous driving, researchers are trying to replace LIDAR sensors with cameras only, or to integrate other sensors alongside the camera. Based on the current state of research, this review covers visual SLAM technologies. In particular, we first illustrate the typical structure of visual SLAM. Second, the state-of-the-art studies of visual and visual-based (i.e., visual-inertial, visual-LIDAR, visual-LIDAR-IMU) SLAM are thoroughly reviewed, and the positioning accuracy of our previous work is compared with well-known frameworks on public datasets. Finally, the key issues and future development trends of visual SLAM technologies for autonomous driving applications are discussed.
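The abstract mentions the typical structure of visual SLAM, whose front end usually tracks image features between consecutive frames to estimate camera motion (with back-end optimization, loop closure, and mapping built on top). As an illustration only, and not a pipeline taken from the reviewed paper, the sketch below shows that front-end step in Python with OpenCV: matching ORB features between two frames and recovering the relative pose. The image file names and the camera intrinsics K are hypothetical placeholders.

# Minimal sketch of a visual-SLAM / visual-odometry front end (illustration only):
# match ORB features between two consecutive frames and recover the relative pose.
import cv2
import numpy as np

# Hypothetical inputs: two consecutive grayscale frames and placeholder intrinsics.
frame1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[718.856, 0.0, 607.193],   # fx, 0, cx  (placeholder, KITTI-like values)
              [0.0, 718.856, 185.216],   # 0, fy, cy
              [0.0, 0.0, 1.0]])

# 1. Detect and describe ORB features in both frames.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# 2. Match binary descriptors with Hamming distance and symmetric cross-check.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate the essential matrix with RANSAC and recover the relative
#    rotation R and unit-scale translation t between the two frames.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print("Relative rotation:\n", R)
print("Relative translation (up to scale):\n", t.ravel())

In a full visual SLAM system, such frame-to-frame estimates would feed local bundle adjustment and loop-closure detection; monocular setups additionally need an external scale reference (e.g., an IMU or LIDAR, as in the visual-inertial and visual-LIDAR variants the review discusses).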