Authors
Tao Xie, Kun Dai, Ke Wang, Ruifeng Li, Lina Zhao
Abstract
Local feature matching constitutes the cornerstone of many computer vision applications (e.g., 3D reconstruction and long-term visual localization) and has been addressed effectively by detector-free methods. To further improve matching performance, recent research has focused on designing sophisticated architectures, but at the price of additional computational overhead. In this study, taking a different perspective from previous work, we aim to develop a deep yet compact matching network that improves performance while reducing computational cost. The key insight is that a local feature matcher with deep layers can capture more human-intuitive, easier-to-match features. To this end, we propose DeepMatcher, a deep Transformer-based network that overcomes the obstacles that prevent current methods from building a deep local feature matcher. DeepMatcher consists of: (1) a local feature extractor (LFE), (2) a feature-transition module (FTM), (3) a slimming Transformer (SlimFormer), (4) a coarse matches module (CMM), and (5) a fine matches module (FMM). The LFE generates dense keypoints with enriched features from the images. We then introduce the FTM to ensure a smooth transition in feature scope from the LFE to the subsequent SlimFormer, since the two have different receptive fields. Next, we develop SlimFormer, dedicated to DeepMatcher, which leverages vector-based attention to model the relevance among all keypoints, enabling the network to build a deep Transformer architecture at lower computational cost. Relative position encoding is applied in each SlimFormer to explicitly disclose relative distance information, thereby improving the representation of the keypoints. A layer-scale strategy is also employed in each SlimFormer so that the network adaptively assimilates message exchange, mimicking the way humans acquire different matching cues each time they scan an image pair. By interleaving self- and cross-SlimFormers multiple times, DeepMatcher readily establishes pixel-wise dense matches at the coarse level via the CMM. Finally, we cast match refinement as a combination of classification and regression and design the FMM to predict confidence and offset concurrently, yielding robust and accurate matches. Compared with our baseline LoFTR on indoor/outdoor pose estimation, DeepMatcher surpasses it by 3.32%/2.91% in AUC@5°. Moreover, DeepMatcher and DeepMatcher-L significantly reduce computational cost, consuming only 77.89% and 92.46% of LoFTR's GFLOPs, respectively. DeepMatcher-L considerably outperforms state-of-the-art methods on several benchmarks, including outdoor pose estimation (MegaDepth dataset), indoor pose estimation (ScanNet dataset), homography estimation (HPatches dataset), and image matching (HPatches dataset), demonstrating the superior matching capability of a deep local feature matcher.
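To make the SlimFormer idea concrete, the following is a minimal PyTorch sketch of a linear-complexity ("vector-based") attention block with a layer-scaled residual update, in the spirit described above. The pooled-context formulation, the module names, and the initial scale value are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VectorAttentionBlock(nn.Module):
    """Sketch of a SlimFormer-style block: linear-complexity attention with
    a learnable layer-scale on the residual branch. Here keys are softmax-
    pooled into a global context matrix, giving O(N*C^2) cost instead of
    the O(N^2*C) of full attention; the paper's exact mechanism may differ."""

    def __init__(self, dim: int, init_scale: float = 1e-2):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)
        # Layer-scale: lets the network adaptively weight how much of each
        # message-exchange step is assimilated at every depth.
        self.gamma = nn.Parameter(init_scale * torch.ones(dim))

    def forward(self, x: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) queries. For self-attention, source is x itself;
        # for cross-attention, source holds the other image's keypoints.
        q = self.q(self.norm_q(x))        # (B, N, C)
        k = self.k(self.norm_kv(source))  # (B, M, C)
        v = self.v(self.norm_kv(source))  # (B, M, C)
        # Pool keys over tokens, then aggregate values into a context matrix.
        ctx = torch.einsum('bmc,bmd->bcd', k.softmax(dim=1), v)      # (B, C, C)
        out = torch.einsum('bnc,bcd->bnd', q.softmax(dim=-1), ctx)   # (B, N, C)
        return x + self.gamma * self.proj(out)  # layer-scaled residual update
```

Interleaving self- and cross-variants of such a block many times is what allows the depth the abstract argues for without a quadratic cost blow-up.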
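The relative position encoding mentioned above could take several forms; the sketch below shows one common realization (a rotary-style encoding, here simplified to 1-D positions) purely for illustration, and is an assumption rather than the paper's formulation.

```python
import math
import torch

def rotary_embed(x: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
    """Illustrative rotary-style relative position encoding (assumed form).
    x:   (B, N, C) keypoint features, C even
    pos: (B, N)    1-D token positions"""
    half = x.shape[-1] // 2
    freq = torch.exp(-torch.arange(half, dtype=x.dtype, device=x.device)
                     * (math.log(10000.0) / half))
    angle = pos[..., None] * freq            # (B, N, C/2)
    x1, x2 = x[..., :half], x[..., half:]
    # Rotating (x1, x2) by `angle` makes the query-key product depend only
    # on the *relative* distance between tokens, not absolute positions.
    return torch.cat([x1 * angle.cos() - x2 * angle.sin(),
                      x1 * angle.sin() + x2 * angle.cos()], dim=-1)
```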
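Since LoFTR is the stated baseline, the CMM plausibly resembles LoFTR's dual-softmax coarse matching; the sketch below illustrates that scheme. The temperature, threshold, and mutual-nearest-neighbour filter are assumptions about the details.

```python
import torch

def coarse_matches(feat0: torch.Tensor, feat1: torch.Tensor,
                   temperature: float = 0.1, thr: float = 0.2):
    """Dual-softmax coarse matching sketch (LoFTR-style; assumed for CMM).
    feat0: (N, C), feat1: (M, C) -- coarse-level descriptors of two images."""
    sim = feat0 @ feat1.t() / temperature             # (N, M) similarity
    conf = sim.softmax(dim=0) * sim.softmax(dim=1)    # dual-softmax scores
    # Keep mutual nearest neighbours above a confidence threshold.
    mask = conf > thr
    mask = mask & (conf == conf.max(dim=1, keepdim=True).values) \
                & (conf == conf.max(dim=0, keepdim=True).values)
    i, j = mask.nonzero(as_tuple=True)
    return i, j, conf[i, j]                           # matched index pairs
```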
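Finally, the classification-plus-regression view of match refinement can be illustrated with a small two-headed module: a shared feature per coarse match yields both a validity confidence and a sub-pixel offset. All layer shapes and the correlation-feature input are hypothetical.

```python
import torch
import torch.nn as nn

class FineHead(nn.Module):
    """Hypothetical sketch of the FMM's classification + regression idea:
    predict match confidence and a refinement offset concurrently."""

    def __init__(self, dim: int):
        super().__init__()
        self.cls = nn.Linear(dim, 1)   # match confidence (classification)
        self.reg = nn.Linear(dim, 2)   # (dx, dy) offset   (regression)

    def forward(self, corr_feat: torch.Tensor):
        # corr_feat: (K, C) correlation feature for each coarse match.
        conf = self.cls(corr_feat).sigmoid().squeeze(-1)  # (K,) in [0, 1]
        offset = self.reg(corr_feat).tanh()               # (K, 2) in [-1, 1]
        return conf, offset
```

Under this reading, a refined match would be the coarse position plus the predicted offset scaled to the fine window, with low-confidence matches rejected.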