Computer science
Simultaneous localization and mapping
Feature extraction
ORB (optics)
Artificial intelligence
Feature (linguistics)
Convolution (computer science)
Visual odometry
Matching (statistics)
Computer vision
Pattern recognition (psychology)
Image (mathematics)
Artificial neural network
Robot
Mobile robot
Mathematics
Linguistics
Statistics
Philosophy
Identifier
DOI:10.1109/iccasit55263.2022.9987187
Abstract
In the front-end visual odometry of SLAM, traditional feature matching methods extract features poorly and unstably under changes in viewpoint and illumination, while deep-learning-based feature matching methods cannot meet real-time requirements on embedded devices with low computing power. To address these problems, this paper improves the SuperPoint network using depthwise separable convolution and designs a lightweight feature extraction network named L_SuperPoint. Based on the L_SuperPoint network, this paper designs a visual SLAM system that not only maps better than ORB-SLAM2 but also runs in real time on embedded devices. Results of dataset simulation experiments and real-scene experiments show that the L_SuperPoint network combines the robust feature extraction of the SuperPoint network with the real-time mapping capability of ORB-SLAM2, effectively improving the operating efficiency and accuracy of the SLAM system.
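The key lightweighting idea named in the abstract is replacing standard convolutions in SuperPoint's encoder with depthwise separable convolutions. The following is a minimal PyTorch sketch of such a building block for illustration only; the channel sizes and layer layout are placeholder assumptions, not the paper's actual L_SuperPoint configuration.

# Illustrative sketch, not the paper's implementation: a depthwise separable
# convolution block of the kind used to lighten a VGG-style encoder such as
# SuperPoint's. Channel sizes below are assumed for demonstration.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Factorizes a standard KxK convolution into a depthwise KxK convolution
    (one filter per input channel) followed by a 1x1 pointwise convolution,
    cutting parameters and multiply-adds roughly by a factor of K*K."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size,
            stride=stride, padding=kernel_size // 2, groups=in_ch, bias=False
        )
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    # Compare parameter counts of a standard 3x3 convolution and its
    # depthwise separable counterpart for an assumed 64 -> 128 channel layer.
    std = nn.Conv2d(64, 128, 3, padding=1, bias=False)
    sep = DepthwiseSeparableConv(64, 128)
    print("standard:", sum(p.numel() for p in std.parameters()))   # 73728
    print("separable:", sum(p.numel() for p in sep.parameters()))  # 9024
    x = torch.randn(1, 64, 120, 160)  # assumed feature-map size
    print(sep(x).shape)               # torch.Size([1, 128, 120, 160])

The roughly 8x reduction in parameters for this example layer is what makes such a substitution attractive for real-time inference on low-power embedded devices, which is the motivation the abstract gives for L_SuperPoint.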