Zhihuang Zhang, Jintao Zhao, Changyao Huang, Liang Li
Source
Journal: IEEE Transactions on Intelligent Vehicles [Institute of Electrical and Electronics Engineers] Date: 2023-01-01 Volume/Issue: 8 (1): 358-367 Cited by: 30
Identifier
DOI: 10.1109/tiv.2022.3173662
Abstract
Precise localization is essential but also challenging for autonomous vehicles. In this article, a novel visual localization method is proposed. Specifically, a semantic local map describing the local environment is built from an image sequence and wheel-inertial ego-motion results. The local semantic map is then matched against an online map database to estimate the camera position. The key novelty of the method lies in using a supervised neural network to simplify the map-matching problem, which avoids complex data association and optimization processes. The network encodes the maps, infers the feature similarity, and predicts the camera position. The visual localization results are then loosely integrated with other onboard sensors by an invariant Kalman filter. We evaluate the map-matching module and the overall fusion system in scenario tests. The experimental results validate the effectiveness of the learning-based map-matching method, and the accuracy of the overall system is satisfactory, with mean absolute errors of 0.039 m and 0.167 m in the lateral and longitudinal directions, respectively.
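The loose integration described in the abstract can be illustrated with a minimal sketch. Note the assumptions: the paper uses an invariant Kalman filter, while this simplified stand-in uses a standard linear Kalman update on a 2-D position state, with hypothetical covariance values; the function name and all numbers are illustrative only. The point it shows is the loose-coupling idea: the visual localization result enters the filter as a direct position measurement rather than through tightly-coupled feature residuals.

```python
import numpy as np

def kalman_position_update(x, P, z, R):
    """Fuse a position measurement z (e.g., a visual localization fix
    from map matching) into state x with covariance P. H = I because
    z directly observes the position state."""
    H = np.eye(2)
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ (z - H @ x)       # corrected state
    P_new = (np.eye(2) - K @ H) @ P   # corrected covariance
    return x_new, P_new

# Hypothetical numbers: dead-reckoned prior vs. a visual position fix.
x_prior = np.array([10.0, 5.0])    # predicted position from ego-motion
P_prior = np.eye(2) * 1.0          # prior uncertainty (assumed)
z_visual = np.array([10.2, 4.8])   # visual localization result (assumed)
R_visual = np.eye(2) * 0.25        # measurement noise (assumed)

x_post, P_post = kalman_position_update(x_prior, P_prior, z_visual, R_visual)
# The posterior lies between the prior and the measurement, weighted
# by their covariances, and the posterior covariance shrinks.
```

In the paper's actual system the state additionally carries attitude and velocity and evolves on a matrix Lie group (hence the invariant filter), but the measurement-update structure of the loose coupling is analogous.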