Keywords
Simultaneous localization and mapping
Visual odometry
Odometry
Feature extraction
Feature matching
ORB
Computer vision
Pattern recognition
Mobile robot
Robotics
Artificial intelligence
Authors
Qingxi Zeng, Bangjun Ou, Rongchen Wang, Haonan Yu, Jingjie Yu, Yixuan Hu
Source
Journal: IEEE Sensors Journal
[Institute of Electrical and Electronics Engineers]
Date: 2023-04-15
Volume/Issue: 23(8): 8789-8796
Identifier
DOI: 10.1109/jsen.2023.3253570
Abstract
Robust positioning is a central problem in robotics. In complex indoor environments, visual simultaneous localization and mapping (VSLAM) is susceptible to illumination changes, and some scenes contain few features, so the robot cannot obtain an accurate position. To address these problems, this work first proposes dynamic adaptive threshold SLAM (DAT-SLAM), a method that extracts feature points using a dynamic adaptive threshold and remains stable under illumination changes. Subsequently, a template-matching visual odometry (VO) is introduced and combined with DAT-SLAM to form a joint indoor localization framework. Experiments on datasets show that DAT-SLAM outperforms oriented FAST and rotated BRIEF SLAM2 (ORB-SLAM2), improving mean positioning accuracy by 11.74%. Experiments in real scenes show that the joint localization framework achieves continuous localization in scenes with sparse features.
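The abstract names two ideas: a feature-extraction threshold that adapts to illumination, and a template-matching VO for feature-sparse scenes. The sketch below illustrates both in a generic form, not the authors' exact algorithm (the abstract gives no formulas): `adaptive_threshold` is a hypothetical rule that scales a detection threshold with image contrast, and `template_match_shift` estimates an integer frame-to-frame shift by normalized cross-correlation of a central patch.

```python
import numpy as np

def adaptive_threshold(gray, k=0.5, lo=5, hi=60):
    """Hypothetical dynamic adaptive threshold: scale a corner-detection
    threshold with image contrast (std of intensity), clamped to [lo, hi].
    Low-contrast (dim) frames get a lower threshold so features survive."""
    return int(np.clip(k * gray.std(), lo, hi))

def template_match_shift(prev, curr, patch=8, search=5):
    """Generic template-matching VO step (not the paper's exact pipeline):
    find the integer (dy, dx) that best aligns the central patch of `prev`
    inside `curr`, scored by normalized cross-correlation."""
    h, w = prev.shape
    cy, cx = h // 2, w // 2
    tpl = prev[cy - patch:cy + patch, cx - patch:cx + patch].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-9)
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = curr[cy - patch + dy:cy + patch + dy,
                       cx - patch + dx:cx + patch + dx].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = (tpl * win).mean()  # normalized cross-correlation
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Usage: recover a known integer shift between two synthetic frames.
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = np.roll(prev, (2, -1), axis=(0, 1))  # move content down 2, left 1
print(template_match_shift(prev, curr))     # (2, -1)
```

In a full system such a translation estimate would only bridge the gaps where the feature-based front end fails; the abstract's joint framework combines the two sources rather than replacing one with the other.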