Authors
Sha Lu, Xuecheng Xu, Yuxuan Wu, Haojian Lu, Xieyuanli Chen, Rong Xiong, Yue Wang
Source
Journal: Cornell University - arXiv
Date: 2024-08-30
Identifier
DOI: 10.48550/arxiv.2409.00206
Abstract
Global localization using onboard perception sensors, such as cameras and LiDARs, is crucial in autonomous driving and robotics applications when GPS signals are unreliable. Most approaches achieve global localization by sequential place recognition (PR) and pose estimation (PE). Some methods train separate models for each task, while others employ a single model with dual heads, trained jointly with separate task-specific losses. However, the accuracy of localization heavily depends on the success of place recognition, which often fails in scenarios with significant changes in viewpoint or environmental appearance. Consequently, a failed place recognition renders the final pose estimation ineffective. To address this, we introduce a new paradigm, PR-by-PE localization, which bypasses the need for separate place recognition by directly deriving it from pose estimation. We propose RING#, an end-to-end PR-by-PE localization network that operates in the bird's-eye-view (BEV) space, compatible with both vision and LiDAR sensors. RING# incorporates a novel design that learns two equivariant representations from BEV features, enabling globally convergent and computationally efficient pose estimation. Comprehensive experiments on the NCLT and Oxford datasets show that RING# outperforms state-of-the-art methods in both vision and LiDAR modalities, validating the effectiveness of the proposed approach. The code will be publicly released.
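To make the PR-by-PE paradigm described in the abstract concrete, the sketch below is a minimal, illustrative Python/NumPy example and not the paper's implementation: it stands in for RING#'s learned equivariant BEV representations with a plain FFT cross-correlation that recovers only a 2D translation, and the function names (correlate_translation, pr_by_pe_localize) and the toy data are assumptions made for illustration. What it demonstrates is the paradigm itself: every map keyframe is scored by how well a pose can be estimated against it, so place recognition falls out of pose estimation rather than preceding it.

import numpy as np

def correlate_translation(query_bev, map_bev):
    # Circular cross-correlation of two BEV grids via FFT.
    # Returns the best (dx, dy) shift and the peak correlation score.
    f_q = np.fft.fft2(query_bev)
    f_m = np.fft.fft2(map_bev)
    corr = np.fft.ifft2(f_q * np.conj(f_m)).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices to signed shifts.
    dy = iy if iy <= corr.shape[0] // 2 else iy - corr.shape[0]
    dx = ix if ix <= corr.shape[1] // 2 else ix - corr.shape[1]
    return (dx, dy), corr[iy, ix]

def pr_by_pe_localize(query_bev, map_keyframes):
    # PR-by-PE: estimate a pose against every map keyframe and let the best
    # matching score decide which place was recognized, instead of running a
    # separate place-recognition model first.
    best = None
    for frame_id, map_bev in map_keyframes.items():
        shift, score = correlate_translation(query_bev, map_bev)
        if best is None or score > best[2]:
            best = (frame_id, shift, score)
    return best  # (recognized keyframe, relative shift, confidence score)

# Toy usage: a circularly shifted copy of the query should win, with the
# correct offset recovered.
rng = np.random.default_rng(0)
query = rng.random((64, 64))
keyframes = {
    "kf_0": rng.random((64, 64)),
    "kf_1": np.roll(query, (3, -5), axis=(0, 1)),
}
print(pr_by_pe_localize(query, keyframes))  # ('kf_1', (5, -3), <high score>)

Because the same matching score drives both retrieval and alignment, a viewpoint change that still permits a good geometric match no longer causes retrieval to fail first, which is the failure mode of sequential PR-then-PE pipelines that the abstract highlights.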