Keywords
Simultaneous localization and mapping
Artificial intelligence
Computer vision
Monocular
Computer science
RGB color model
Bundle adjustment
Convolutional neural network
Feature (linguistics)
Monocular vision
Feature extraction
Image (mathematics)
Robot
Mobile robot
Linguistics
Philosophy
Authors
Yifan Jin, Lei Yu, Zhong Chen, Shumin Fei
Source
Journal: IEEE Sensors Journal [Institute of Electrical and Electronics Engineers]
Date: 2022-02-01
Volume/Issue: 22 (3): 2447-2455
Cited by: 6
Identifier
DOI: 10.1109/jsen.2021.3134014
Abstract
SLAM (simultaneous localization and mapping) systems based on monocular cameras cannot directly obtain depth information; most suffer from scale uncertainty and require an initialization step. Their inability to produce dense maps is a further drawback in application scenarios that require navigation and obstacle avoidance. To address these problems, this paper proposes a method that learns depth estimation with DenseNet and a CNN for a monocular SLAM system. An encoder-decoder architecture based on transfer learning and convolutional neural networks estimates the depth of monocular RGB images. By combining front-end ORB feature extraction with back-end direct RGB-D bundle adjustment optimization on the estimated depth, the system obtains accurate camera poses and achieves dense indoor mapping. Experimental results show that the monocular depth estimation model performs well and is competitive with current popular methods. Building on this, the camera pose estimation error is smaller than that of traditional monocular SLAM solutions, and the system can complete dense indoor reconstruction, forming a complete SLAM system based on a monocular camera.
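The depth estimation component described in the abstract is an encoder-decoder CNN with a DenseNet backbone trained via transfer learning. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' exact network: the DenseNet-169 backbone, the decoder layer sizes, the absence of skip connections, and the omission of pretrained weights and a training loss are all assumptions made for brevity.

```python
# Minimal sketch of a DenseNet-based encoder-decoder for monocular depth
# estimation (hypothetical layer sizes; not the authors' exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class DepthEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: DenseNet-169 feature extractor. Transfer learning would load
        # ImageNet weights here; omitted to keep the sketch self-contained.
        self.encoder = models.densenet169(weights=None).features
        # Decoder: simple upsampling head producing a single-channel depth map.
        self.decoder = nn.Sequential(
            nn.Conv2d(1664, 512, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(512, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 1, kernel_size=3, padding=1),
        )

    def forward(self, rgb):
        feat = self.encoder(rgb)     # B x 1664 x H/32 x W/32 for DenseNet-169
        depth = self.decoder(feat)   # B x 1 x H/8 x W/8
        # Upsample back to the input resolution so the prediction can be paired
        # with the RGB frame as a pseudo RGB-D input.
        return F.interpolate(depth, size=rgb.shape[-2:], mode="bilinear",
                             align_corners=False)

if __name__ == "__main__":
    model = DepthEstimator()
    dummy = torch.randn(1, 3, 480, 640)   # one RGB frame
    print(model(dummy).shape)             # torch.Size([1, 1, 480, 640])
```

In the full system described by the paper, such a predicted depth map would be combined with the RGB frame so that ORB feature extraction and RGB-D bundle adjustment can run as in a standard RGB-D SLAM pipeline.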