Simultaneous Localization and Mapping
Artificial Intelligence
Computer Vision
Computer Science
Mobile Robots
Robot
Robotics
Authors
Cheng-Zheng Sun, Zhang Bo, Jikai Wang, Chengshu Zhang
Identifier
DOI:10.1109/icaie53562.2021.00055
Abstract
Simultaneous Localization and Mapping (SLAM) consists of constructing a map of the environment while simultaneously estimating the robot's state within it; Visual SLAM (VSLAM) performs SLAM using cameras and other visual sensors. VSLAM has become an essential capability for mobile robots, drones, unmanned vehicles, and other unmanned systems operating in unknown environments, enabling autonomous navigation and environmental perception. First, the architecture, mathematical models, current research status, and algorithms of each VSLAM component are reviewed. Then, the research hotspots and current challenges of VSLAM are summarized in three areas: (i) VSLAM and deep learning; (ii) multi-sensor data processing; (iii) VSLAM in visual/inertial navigation. Moreover, research trends in VSLAM are further analyzed, including (i) deep learning and depth estimation, (ii) active and multi-robot VSLAM, and (iii) semantic VSLAM. Finally, the future development of VSLAM is discussed, which may offer guidance for researchers in this area.
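For context, the state-estimation problem summarized in the abstract is conventionally posed in the standard probabilistic SLAM formulation sketched below. The notation (state $x$, control $u$, observation $z$, map $m$, noise terms $w$, $v$) is the generic textbook convention, not notation taken from this paper, and the paper's own models may differ in detail.

\begin{aligned}
x_k &= f(x_{k-1}, u_k) + w_k && \text{(motion model: state propagated by control input and process noise)}\\
z_k &= h(x_k, m) + v_k && \text{(observation model: visual measurement of the map from the current state)}\\
&\hat{x}_{0:k}, \hat{m} = \arg\max \; p(x_{0:k}, m \mid z_{1:k}, u_{1:k}) && \text{(joint posterior over trajectory and map to be estimated)}
\end{aligned}

Filter-based VSLAM approximates this posterior recursively (e.g. EKF variants), while modern keyframe-based systems solve the equivalent nonlinear least-squares problem by bundle adjustment over a pose graph.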