Simultaneous localization and mapping
Computer science
Computer vision
Thread (computing)
Artificial intelligence
Robot
Visualization
RGB color model
Robustness (evolution)
Mobile robot
Biochemistry
Gene
Operating system
Chemistry
Authors
Shuhong Cheng, Changhe Sun, Shijun Zhang, Dianfan Zhang
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 72: 1-12
Citations: 16
Identifier
DOI: 10.1109/tim.2022.3228006
Abstract
Simultaneous localization and mapping (SLAM) is one of the fundamental capabilities that intelligent mobile robots need to perform state estimation in unknown environments. However, most visual SLAM systems rely on the static scene assumption and consequently suffer severely reduced accuracy and robustness in dynamic scenes. Moreover, the metric maps constructed by many systems lack semantic information, so the robots cannot understand their surroundings at a human cognitive level. In this article, we propose SG-SLAM, a real-time RGB-D semantic visual SLAM system based on the ORB-SLAM2 framework. First, SG-SLAM adds two new parallel threads: an object detection thread to obtain 2-D semantic information and a semantic mapping thread. Then, a fast dynamic feature rejection algorithm combining semantic and geometric information is added to the tracking thread. Finally, the 3-D point clouds and 3-D semantic objects generated in the semantic mapping thread are published to the Robot Operating System (ROS) for visualization. We performed an experimental evaluation on the TUM dataset, the Bonn dataset, and the OpenLORIS-Scene dataset. The results show that SG-SLAM is not only among the most real-time, accurate, and robust systems in dynamic scenes but also produces intuitive semantic metric maps.
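The abstract's key idea, rejecting dynamic features by combining 2-D semantic detections with a geometric (epipolar) consistency check, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the dynamic-class list, and the pixel threshold are all assumptions, and a real system would operate on ORB features and a fundamental matrix estimated by RANSAC.

```python
import math

# Illustrative sketch (not the SG-SLAM code): drop feature matches that
# either land inside a detected movable-object box (semantic cue) or
# violate the epipolar constraint x2^T F x1 = 0 (geometric cue).

DYNAMIC_CLASSES = {"person", "dog", "car"}  # assumed set of movable classes

def in_box(pt, box):
    """Axis-aligned point-in-box test; box = (x1, y1, x2, y2)."""
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def epipolar_distance(p1, p2, F):
    """Pixel distance from p2 to the epipolar line l = F @ [p1; 1]."""
    x1 = (p1[0], p1[1], 1.0)
    l = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    num = abs(l[0] * p2[0] + l[1] * p2[1] + l[2])
    den = math.hypot(l[0], l[1])
    return num / den if den > 0 else float("inf")

def reject_dynamic_features(matches, detections, F, thresh=1.0):
    """Keep matches that pass both the semantic and geometric checks.

    matches    : list of ((u1, v1), (u2, v2)) feature correspondences
    detections : list of (class_label, box) from the object detector
    F          : 3x3 fundamental matrix between the two frames
    thresh     : max epipolar distance in pixels (assumed value)
    """
    static = []
    for p1, p2 in matches:
        # Semantic cue: feature lies on a potentially moving object.
        if any(label in DYNAMIC_CLASSES and in_box(p2, box)
               for label, box in detections):
            continue
        # Geometric cue: match inconsistent with the camera motion.
        if epipolar_distance(p1, p2, F) > thresh:
            continue
        static.append((p1, p2))
    return static
```

Only the surviving matches would then feed the pose optimization in the tracking thread, which is what restores accuracy in dynamic scenes.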