Odometry
Artificial intelligence
Computer science
Computer vision
Robot
Visual odometry
Simultaneous localization and mapping
Optics (focus)
Robotics
Mobile robot
Physics
Optics
Authors
Ali Samadzadeh, Ahmad Nickabadi
Source
Journal: IEEE Transactions on Robotics [Institute of Electrical and Electronics Engineers]
Date: 2023-05-11
Volume/Issue: 39 (4): 2878-2891
Citations: 5
Identifier
DOI: 10.1109/tro.2023.3268591
Abstract
There has been extensive research on visual localization and odometry for autonomous robots and virtual reality during the past decades. Traditionally, this problem has been solved with the help of expensive sensors, such as light detection and ranging (LiDAR). Nowadays, the focus of the leading research in this field is on robust localization using more economical sensors, such as cameras and inertial measurement units. Consequently, geometric visual localization methods have become more accurate over time. However, these methods still suffer from significant loss and divergence in challenging environments, such as a room full of moving people. Scientists started using deep neural networks (DNNs) to mitigate this problem. The main idea behind using DNNs is to better understand challenging aspects of the data and overcome complex conditions, such as a dynamic object moving in front of the camera and covering its full view, extreme lighting conditions, and high camera speed. Prior end-to-end DNN methods did overcome some of these challenges. However, no general and robust framework is available to overcome all challenges together. In this article, we have combined geometric and DNN-based methods to attain the generality and speed of geometric SLAM frameworks, overcome most of these challenging conditions with the help of DNNs, and deliver the most robust framework so far. To do so, we have designed a framework based on VINS-Mono and shown that it achieves state-of-the-art results on the TUM-Dynamic, TUM-VI, ADVIO, and EuRoC datasets compared to geometric and end-to-end DNN-based simultaneous localization and mapping methods. Our proposed framework also achieves outstanding results on extreme simulated cases resembling the aforementioned challenges.
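The abstract does not state which error metric underlies the dataset comparisons; a standard choice for trajectory benchmarks such as EuRoC and TUM-VI is the absolute trajectory error (ATE). A minimal sketch, assuming two already-aligned position sequences (the function name `ate_rmse` and the toy trajectories are illustrative, not from the paper):

```python
import numpy as np

def ate_rmse(gt: np.ndarray, est: np.ndarray) -> float:
    """Root-mean-square absolute trajectory error between two aligned
    (N, 3) position sequences: RMS of per-timestamp Euclidean distances."""
    diff = gt - est
    per_frame_err = np.sqrt(np.sum(diff ** 2, axis=1))  # distance at each timestamp
    return float(np.sqrt(np.mean(per_frame_err ** 2)))

# Toy example: an estimate offset from ground truth by 0.1 m along x.
gt = np.zeros((5, 3))
est = gt.copy()
est[:, 0] += 0.1
print(round(ate_rmse(gt, est), 3))  # 0.1
```

In practice the estimated trajectory is first rigidly aligned to ground truth (e.g. via a Horn/Umeyama fit) before computing this RMSE; the sketch above skips that step for brevity.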