Artificial intelligence
Computer vision
Simultaneous localization and mapping
Computer science
RGB color model
Segmentation
Trajectory
Inpainting
Optical flow
Feature (linguistics)
Object detection
Mobile robot
Image (mathematics)
Robot
Linguistics
Physics
Philosophy
Astronomy
Authors
Wanfang Xie, Peter Liu, Minhua Zheng
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume/pages: 70: 1-8
Citations: 29
Identifier
DOI: 10.1109/tim.2020.3026803
Abstract
Localization accuracy is a fundamental requirement for Simultaneous Localization and Mapping (SLAM) systems. Traditional visual SLAM (vSLAM) schemes are usually based upon the assumption of static environments, so they do not perform well in dynamic environments. While a number of vSLAM frameworks have been reported for dynamic environments, their localization accuracy is usually unsatisfactory. In this article, we present a novel motion detection and segmentation method using Red Green Blue-Depth (RGB-D) data to improve the localization accuracy of feature-based RGB-D SLAM in dynamic environments. To overcome the problem of undersegmentation by the semantic segmentation network, a mask inpainting method is developed to ensure the completeness of object segmentation. Meanwhile, an optical flow-based motion detection method is proposed to detect dynamic objects from moving cameras, allowing robust detection by removing irrelevant information. Experiments performed on the public Technical University of Munich (TUM) RGB-D data set show that the presented scheme outperforms state-of-the-art RGB-D SLAM systems in terms of trajectory accuracy, improving the localization accuracy of RGB-D SLAM in dynamic environments.
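The core idea of optical flow-based motion detection — separating independently moving objects from camera-induced flow — can be illustrated with a minimal sketch. This is not the authors' algorithm: the median/MAD thresholding used here, the function name `dynamic_mask`, and the parameter `k` are all illustrative assumptions standing in for the paper's actual motion-detection scheme.

```python
import numpy as np

def dynamic_mask(flow, k=3.0):
    """Flag pixels whose optical-flow magnitude deviates strongly from the
    scene-wide median, a crude proxy for independently moving objects.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements.
    Assumes the camera's ego-motion dominates the frame, so the median
    magnitude approximates the background flow (an illustrative assumption).
    """
    mag = np.linalg.norm(flow, axis=2)          # per-pixel flow magnitude
    med = np.median(mag)                        # dominant (camera) motion
    mad = np.median(np.abs(mag - med)) + 1e-6   # robust spread estimate
    return np.abs(mag - med) > k * mad          # True = likely dynamic

# toy frame: uniform camera-induced flow with one fast-moving 2x2 patch
flow = np.full((8, 8, 2), 1.0)
flow[2:4, 2:4] = 6.0
mask = dynamic_mask(flow)
print(mask.sum())  # → 4 (only the moving patch is flagged)
```

In a feature-based SLAM front end, such a mask would be intersected with the (inpainted) semantic segmentation mask so that features on dynamic objects are excluded before pose estimation.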