Artificial intelligence
Computer vision
Simultaneous localization and mapping
Computer science
Line (geometry)
Point (geometry)
Point cloud
Mathematics
Mobile robot
Robot
Geometry
Authors
Jinjin Yan, Y. Zheng, Jinquan Yang, Lyudmila Mihaylova, Weijie Yuan, Fuqiang Gu
Abstract
Simultaneous localization and mapping (SLAM) is required in many areas, and visual-based SLAM (VSLAM) in particular, owing to its low cost and strong scene-recognition capabilities. Conventional VSLAM relies primarily on scene features such as point features, which can make mapping challenging in scenarios with sparse texture. For instance, in environments with low or even no texture, such as certain indoor scenes, conventional VSLAM may fail for lack of sufficient features. To address this issue, this paper proposes a VSLAM system that adaptively fuses point, line, and plane features (PLPF-VSLAM). As the name implies, it can adaptively employ different fusion strategies on the PLPF for tracking and mapping: in richly textured scenes it utilizes point features alone, while in non-/low-textured scenarios it automatically selects a fusion of point, line, and/or plane features. PLPF-VSLAM is evaluated on two RGB-D benchmarks, the TUM and ICL-NUIM datasets. The results demonstrate the superiority of PLPF-VSLAM over other commonly used VSLAM systems: compared to ORB-SLAM2, it improves accuracy by approximately 11.29%, and its processing speed outperforms PL(P)-VSLAM by approximately 21.57%.
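As a rough illustration of the adaptive fusion idea described above (not the authors' implementation), the minimal Python sketch below, assuming OpenCV, switches feature strategies based on how many point features a frame yields. The threshold MIN_POINT_FEATURES and the switching rule are hypothetical stand-ins for the paper's actual criteria, and plane extraction from the RGB-D depth channel is left as a placeholder.

import cv2

# Hypothetical threshold separating "rich-texture" from "low-texture" frames;
# the paper's actual switching criterion is not given in the abstract.
MIN_POINT_FEATURES = 150

orb = cv2.ORB_create(nfeatures=1000)   # point features (as in ORB-SLAM2)
lsd = cv2.createLineSegmentDetector()  # line features for sparse-texture frames

def select_features(gray_frame):
    """Pick the feature sets to fuse for tracking, per the PLPF idea."""
    keypoints = orb.detect(gray_frame, None)
    if len(keypoints) >= MIN_POINT_FEATURES:
        # Rich texture: point features alone suffice.
        return {"points": keypoints}
    # Low/no texture: fuse points with line features; a real system would
    # also fit planes to the RGB-D point cloud here (omitted in this sketch).
    lines, _, _, _ = lsd.detect(gray_frame)
    return {"points": keypoints, "lines": lines}

A full tracker would then build reprojection, line, and plane residuals from whichever feature sets are returned for each frame.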