Artificial intelligence
Computer science
Feature extraction
Feature (linguistics)
Simultaneous localization and mapping
Convolutional neural network
Pattern recognition (psychology)
Computer vision
Task (project management)
Deep learning
Mobile robot
Robot
Engineering
Linguistics
Philosophy
Systems engineering
Authors
Guangqiang Li, Lei Yu, Shumin Fei
Source
Journal: Measurement
[Elsevier BV]
Date: 2021-01-01
Volume/Issue: 168: 108403-108403
Cited by: 28
Identifier
DOI:10.1016/j.measurement.2020.108403
Abstract
Simultaneous Localization and Mapping (SLAM) is the basis for intelligent mobile robots operating in unknown environments. However, the traditional feature extraction algorithms that visual SLAM systems rely on struggle with texture-less regions and other complex scenes, which limits the development of visual SLAM. Studies of feature point extraction based on deep learning show that such methods handle complex scenes better than traditional methods, but they emphasize accuracy while ignoring efficiency. To address these problems, this paper proposes a deep-learning real-time visual SLAM system based on a multi-task feature extraction network and self-supervised feature points. By designing a simplified Convolutional Neural Network (CNN) that detects feature points and computes descriptors in place of the traditional feature extractor, the accuracy and stability of the visual SLAM system are enhanced. Experimental results on a dataset and in real environments show that the proposed system maintains high accuracy in a variety of challenging scenes, runs in real time on a GPU, and supports the construction of dense 3D maps. Moreover, its overall performance is better than that of current traditional visual SLAM systems.
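The abstract only describes the network at a high level (a single CNN that jointly outputs feature points and descriptors, replacing a hand-crafted extractor such as ORB). The block below is a minimal, hypothetical PyTorch sketch of one common way to structure such a multi-task extractor, in the style of SuperPoint: a shared encoder followed by a detection head and a descriptor head. All layer sizes and names (FeatureNetSketch, desc_dim, etc.) are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureNetSketch(nn.Module):
    """Illustrative multi-task feature network: shared encoder + keypoint
    detection head + descriptor head. Layer sizes are assumptions."""

    def __init__(self, desc_dim=256):
        super().__init__()
        # Shared encoder: downsamples the grayscale image by 8x.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Detector head: 65 channels = 64 positions in an 8x8 cell + 1 "no keypoint" bin.
        self.det_head = nn.Conv2d(128, 65, 1)
        # Descriptor head: coarse descriptors, L2-normalised per pixel.
        self.desc_head = nn.Conv2d(128, desc_dim, 1)

    def forward(self, img):
        feat = self.encoder(img)                 # (B, 128, H/8, W/8)
        logits = self.det_head(feat)             # (B, 65, H/8, W/8)
        prob = F.softmax(logits, dim=1)[:, :-1]  # drop the "no keypoint" channel
        # Rearrange the 64 cell bins back into a full-resolution heatmap.
        heatmap = F.pixel_shuffle(prob, 8)       # (B, 1, H, W)
        desc = F.normalize(self.desc_head(feat), p=2, dim=1)
        return heatmap.squeeze(1), desc


if __name__ == "__main__":
    net = FeatureNetSketch()
    gray = torch.rand(1, 1, 480, 640)            # dummy grayscale frame
    heatmap, desc = net(gray)
    print(heatmap.shape, desc.shape)             # (1, 480, 640), (1, 256, 60, 80)
```

In a SLAM front end, keypoints would be taken as local maxima of the heatmap above a threshold, with descriptors sampled from the coarse descriptor map at those locations; how the paper trains this network (self-supervised feature points) and integrates it in real time is detailed in the full text.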