Keywords
Inertial frame of reference
Simultaneous localization and mapping
Computer vision
Artificial intelligence
Computer science
Materials science
Physics
Robotics
Classical mechanics
Mobile robot
Authors
Tianbing Ma, Yanan Li, Fei Du, J. Shu, Changpeng Li
Identifier
DOI:10.1088/1361-6501/ad9627
Abstract
In low-light environments, the scarcity of visual information makes feature extraction and matching challenging for traditional visual Simultaneous Localization and Mapping (SLAM) systems. Changes in ambient lighting can also reduce the accuracy and recall of loop closure detection. Moreover, most existing image enhancement methods tend to introduce noise, artifacts, and color distortions when brightening images. To address these issues, we propose an innovative low-light visual-inertial SLAM system, named LL-VI SLAM, which integrates an image enhancement network into the front end of the SLAM pipeline. The system consists of a learning-based low-light enhancement network and an improved visual-inertial odometry module. The low-light enhancement network, composed of a Retinex-based enhancer and a U-Net-based denoiser, increases image brightness while mitigating the adverse effects of noise and artifacts. Additionally, we incorporate a robust Inertial Measurement Unit (IMU) initialization process at the front end of the system to accurately estimate gyroscope biases and improve rotational estimation accuracy. Experimental results demonstrate that LL-VI SLAM outperforms existing methods on three datasets, namely LOLv1, ETH3D, and TUM VI, as well as in real-world scenarios. Our approach achieves a Peak Signal-to-Noise Ratio of 22.08 dB. Moreover, on the TUM VI dataset, our system reduces localization error by 33.3% compared to ORB-SLAM3, demonstrating the accuracy and robustness of the proposed method in low-light environments.
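The abstract describes the enhancer as Retinex-based. As a rough illustration of the underlying Retinex idea only (not the authors' learned network, whose architecture is not given here), a classic single-scale Retinex decomposition estimates illumination by Gaussian smoothing and takes the log-domain difference as reflectance. The function name and `sigma` value below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0):
    """Illustrative single-scale Retinex enhancement.

    reflectance = log(I) - log(illumination), where the illumination
    component is estimated by Gaussian-smoothing the input image.
    """
    img = image.astype(np.float64) + 1.0          # offset to avoid log(0)
    illumination = gaussian_filter(img, sigma=sigma)
    reflectance = np.log(img) - np.log(illumination)
    # Rescale the log-domain reflectance back to [0, 255] for display.
    r_min, r_max = reflectance.min(), reflectance.max()
    enhanced = (reflectance - r_min) / (r_max - r_min + 1e-8) * 255.0
    return enhanced.astype(np.uint8)
```

A learned enhancer such as the one described in the paper would replace the fixed Gaussian illumination estimate with a trained network and pair it with a U-Net denoiser; this sketch only shows why Retinex-style processing can amplify noise, motivating that denoising stage.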