Artificial intelligence
Computer science
Computer vision
Monocular
Simultaneous localization and mapping
Robustness (evolution)
Convolutional neural network
Benchmark (surveying)
Deep learning
Ambiguity
3D reconstruction
Robot
Mobile robot
Geography
Gene
Geodesy
Chemistry
Biochemistry
Programming language
Authors
Xinchen Ye, Ji Xiang, Baoli Sun, Shenglun Chen, Zhihui Wang, Haojie Li
Source
Journal: Neurocomputing
[Elsevier BV]
Date: 2020-07-01
Volume/Pages: 396: 76-91
Cited by: 22
Identifier
DOI: 10.1016/j.neucom.2020.02.044
Abstract
Monocular visual SLAM methods can accurately track the camera pose and infer the scene structure by building sparse correspondences between two or more views of the scene. However, the reconstructed 3D maps of these methods are extremely sparse. On the other hand, deep learning is widely used to predict dense depth maps from single-view color images, but the results suffer from blurry depth boundaries, which severely deform the structure of the 3D scene. Therefore, this paper proposes a dense reconstruction method under the monocular SLAM framework (DRM-SLAM), in which a novel scene depth fusion scheme is designed to fully utilize both the sparse depth samples from monocular SLAM and the dense depth maps predicted by a convolutional neural network (CNN). In the scheme, a CNN architecture is carefully designed for robust depth estimation. Our approach also accounts for the scale ambiguity inherent in monocular SLAM. Extensive experiments on benchmark datasets and our captured dataset demonstrate the accuracy and robustness of the proposed DRM-SLAM. The evaluation of runtime and adaptability under challenging environments also verifies the practicability of our method.
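The abstract hinges on two ideas: correcting the arbitrary scale of monocular SLAM depths against the CNN prediction, and fusing the sparse but geometrically reliable SLAM samples into the dense but boundary-blurred CNN depth map. The sketch below is a minimal, assumed illustration of those two steps (a least-squares global scale plus a Gaussian-weighted local correction); the function names and the weighting scheme are hypothetical and are not the paper's actual fusion network or formulation.

```python
import numpy as np

def align_scale(cnn_depth, slam_points):
    """Estimate a global scale s minimizing ||s * d_slam - d_cnn||^2 over the
    sparse SLAM samples, to resolve monocular scale ambiguity (assumed step).
    slam_points: iterable of (u, v, d_slam) pixel coordinates and depths."""
    d_cnn = np.array([cnn_depth[v, u] for u, v, _ in slam_points])
    d_slam = np.array([d for _, _, d in slam_points])
    return float(d_slam @ d_cnn) / float(d_slam @ d_slam)

def fuse_depth(cnn_depth, slam_points, sigma=5.0):
    """Blend the scaled sparse SLAM depths into the dense CNN map using a
    Gaussian influence window around each sample (illustrative only; the
    paper's scheme is learned/structured rather than this simple blend)."""
    s = align_scale(cnn_depth, slam_points)
    h, w = cnn_depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Start from the CNN prediction with unit weight, then accumulate
    # weighted contributions from each sparse, scale-corrected sample.
    acc = cnn_depth.copy()
    weights = np.ones_like(cnn_depth)
    for u, v, d in slam_points:
        wgt = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))
        acc += wgt * (s * d)
        weights += wgt
    return acc / weights

# Example usage with synthetic data (hypothetical values):
if __name__ == "__main__":
    cnn_depth = np.full((120, 160), 2.0)
    slam_points = [(40, 30, 1.0), (100, 60, 1.1), (80, 90, 0.95)]
    fused = fuse_depth(cnn_depth, slam_points)
    print(fused.shape, fused.min(), fused.max())
```

The design choice sketched here, anchoring the global scale on the CNN prediction and then letting sparse samples dominate locally, mirrors the abstract's stated goal of combining the two depth sources, but the exact formulation in DRM-SLAM should be taken from the paper itself.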