Simultaneous localization and mapping
Artificial intelligence
Computer vision
Computer science
Rendering (computer graphics)
Segmentation
ORB (optics)
Robotics
Trajectory
Robot
Mobile robot
Image (mathematics)
Physics
Astronomy
Authors
Chenyu Ruan, Qiuyu Zang, Kehua Zhang, Kai Huang
Identifier
DOI:10.1109/jsen.2023.3345877
Abstract
Visual simultaneous localization and mapping (SLAM) is essential for localization and for adapting to new environments, and is therefore widely used in robotics. However, accurate pose estimation and map consistency remain challenging in dynamic environments. In addition, building dense scene maps is critical for spatial artificial intelligence (AI) applications such as visual localization and navigation. We propose DN-SLAM, a visual SLAM system with ORB features and NeRF mapping for dynamic environments, built on oriented FAST and rotated BRIEF (ORB)-SLAM3. DN-SLAM uses ORB features for tracking, obtains potentially moving objects through semantic segmentation, and combines optical flow with the segment anything model (SAM) to perform fine segmentation and cull dynamic feature points, enhancing SLAM performance in dynamic environments. Meanwhile, a neural radiance field (NeRF) removes dynamic objects and renders the static scene in 3-D. We performed experiments on both the Technical University of Munich (TUM) RGB-D dataset and the Bonn dataset, and compared our results with state-of-the-art dynamic SLAM algorithms. The results show that, compared with ORB-SLAM3, DN-SLAM significantly improves trajectory accuracy in highly dynamic environments, achieves more accurate localization than other advanced dynamic SLAM methods, and successfully reconstructs static scenes in 3-D.
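The dynamic-feature culling described in the abstract can be illustrated with a minimal sketch, not the authors' implementation: ORB keypoints that fall on regions flagged as potentially moving (here a binary `dynamic_mask`, assumed to come from semantic segmentation refined with SAM) and confirmed as moving by optical flow are discarded before tracking. The function name `cull_dynamic_keypoints`, the `flow_thresh` parameter, and the use of Farneback dense flow are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the DN-SLAM implementation) of culling
# ORB keypoints that lie on confirmed dynamic regions.
import cv2
import numpy as np

def cull_dynamic_keypoints(prev_gray, curr_gray, dynamic_mask, flow_thresh=1.0):
    """Detect ORB keypoints in curr_gray and drop those on moving masked regions.

    dynamic_mask: binary image, 1 where a potentially moving object was segmented.
    flow_thresh: hypothetical pixel-motion threshold for calling a region "moving".
    """
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(curr_gray, None)
    if descriptors is None:
        return [], np.empty((0, 32), dtype=np.uint8)

    # Dense optical flow between consecutive frames (Farneback as a stand-in).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mag = np.linalg.norm(flow, axis=2)

    static_kps, static_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        in_mask = dynamic_mask[y, x] > 0
        moving = flow_mag[y, x] > flow_thresh
        # Keep the keypoint only if it is not on a confirmed dynamic region.
        if not (in_mask and moving):
            static_kps.append(kp)
            static_desc.append(desc)

    return static_kps, np.asarray(static_desc)
```

In a pipeline along the lines sketched above, the surviving static keypoints would feed the ORB-SLAM3 tracking front end, while the masked dynamic regions would be excluded when fitting the NeRF so that only the static scene is reconstructed.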