Keywords: simultaneous localization and mapping (SLAM), artificial intelligence, computer vision, computer science, line (geometry), monocular, optical flow, feature extraction, matching (statistics), feature (linguistics), point (geometry), mathematics, image (mathematics), robot, statistics, philosophy, linguistics, mobile robot, geometry
Authors
Lei Xu, Hesheng Yin, Tong Shi, D. M. Jiang, Bo Huang
Source
Journal: IEEE Robotics and Automation Letters
Date: 2022-12-26
Volume/Issue: 8 (2): 752-759
Citations: 24
Identifier
DOI: 10.1109/lra.2022.3231983
Abstract
This letter introduces an efficient visual-inertial simultaneous localization and mapping (SLAM) method using point and line features. Currently, point-based SLAM methods do not perform well in scenarios such as weak texture and motion blur. Many researchers have noticed the excellent spatial properties of line features and have attempted to develop line-based SLAM systems. However, the heavy computational cost of line extraction and descriptor matching makes it challenging to guarantee the real-time performance of the whole SLAM system, and incorrect line detection and matching limit the achievable accuracy gains. In this letter, we improve the traditional line detection model by means of short-line fusion, uniform distribution of line features, and adaptive-threshold extraction to obtain high-quality line features for constructing SLAM constraints. Based on the gray-level invariance assumption and a collinearity constraint, we propose a line optical flow tracking method, which significantly improves the speed of line feature matching. In addition, a measurement model that is independent of line endpoints is presented for estimating line residuals. Experimental results show that our algorithm improves the efficiency of line feature detection and matching as well as localization accuracy.
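The tracking idea in the abstract (track sampled points on a line under the gray-level invariance assumption, then recover the line via the collinearity constraint) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `point_tracker` callable stands in for a real per-point optical-flow tracker (e.g. Lucas-Kanade on image patches), and all function names here are hypothetical.

```python
import numpy as np

def sample_line_points(p0, p1, n=8):
    """Uniformly sample n points along the segment p0 -> p1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) * p0 + t * p1

def fit_line_tls(points):
    """Total-least-squares line fit: returns (point_on_line, unit_direction)."""
    centroid = points.mean(axis=0)
    # Principal direction of the centered points gives the line direction.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def track_line(p0, p1, point_tracker, n=8):
    """Track a line segment between two frames:
    1) sample points on the segment,
    2) track each point independently (stand-in for gray-level-invariance
       optical flow, e.g. Lucas-Kanade),
    3) enforce the collinearity constraint by re-fitting a line
       through the tracked points.
    """
    pts = sample_line_points(np.asarray(p0, float), np.asarray(p1, float), n)
    tracked = np.array([point_tracker(p) for p in pts])
    return fit_line_tls(tracked)

# Toy example: the "optical flow" is a known translation plus small noise.
rng = np.random.default_rng(0)
flow = lambda p: p + np.array([3.0, -2.0]) + rng.normal(0.0, 0.05, 2)
centroid, direction = track_line((0, 0), (10, 5), flow)
```

Fitting a single line through all tracked points is what makes the matching cheap: no line descriptors are computed, and outlier point tracks are averaged out by the least-squares fit.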