Discriminator
Synthetic aperture radar
Artificial intelligence
Matching (statistics)
Feature (linguistics)
Computer science
Computer vision
Feature extraction
Pixel
Image (mathematics)
Pattern recognition (psychology)
Mathematics
Philosophy
Statistics
Detector
Telecommunication
Linguistics
Authors
Yuanxin Ye,Chao Yang,Guoqing Gong,Peizhen Yang,Dou Quan,Jiayuan Li
Identifier
DOI:10.1109/tgrs.2024.3366247
Abstract
Because optical and SAR images are complementary, their alignment is of growing interest. However, the significant radiometric differences between them make precise matching very challenging. Although current advanced structural features and deep learning-based methods offer feasible solutions, there remains considerable room for improvement. In this paper, we propose a hybrid matching method using attention-enhanced structural features (namely AESF), which combines the advantages of handcrafted and learning-based methods to improve the accuracy of optical and SAR image matching. It consists of two modules: a novel and effective multi-branch global attention (MBGA) module and a joint multi-cropping image matching loss function (MCTM) module. The MBGA module is designed to focus on the information shared by the structural feature descriptors of heterogeneous images across the spatial and channel dimensions, significantly improving the expressive capacity of classical structural features and generating more refined and robust image features. The MCTM module is constructed to fully exploit the association between the global and local information of the input image, optimizing the triplet-loss discriminator that separates positive from negative samples. To validate the effectiveness of the proposed method, we compare it with five state-of-the-art matching methods on various optical and SAR datasets. The experimental results show that matching accuracy at the 1-pixel threshold improves by about 1.8%-8.7% over the most advanced deep learning method (OSMNet) and by 6.5%-23% over the handcrafted description method (CFOG).
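The two ideas named in the abstract, attention reweighting of a feature map along the channel and spatial dimensions (MBGA) and a triplet loss that separates positive and negative sample pairs (MCTM), follow well-known general patterns. The sketch below (NumPy; all function names are hypothetical) illustrates those generic patterns only and is not the authors' AESF implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Reweight a (C, H, W) feature map along the channel axis, then the
    spatial axes. Illustrative only -- not the paper's MBGA module."""
    # Channel attention: one sigmoid gate per channel from its global mean.
    chan = sigmoid(feat.mean(axis=(1, 2)))        # shape (C,)
    feat = feat * chan[:, None, None]
    # Spatial attention: one sigmoid gate per pixel from the channel mean.
    spat = sigmoid(feat.mean(axis=0))             # shape (H, W)
    return feat * spat[None, :, :]

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: push the anchor-positive distance below the
    anchor-negative distance by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
attended = channel_spatial_attention(rng.standard_normal((8, 16, 16)))
print(attended.shape)  # attention preserves the (C, H, W) shape
```

The triplet loss is zero once negatives are already `margin` farther from the anchor than positives, so only hard or semi-hard triplets contribute gradient; that is why descriptor-learning methods pair it with a sampling or cropping strategy such as the paper's multi-cropping scheme.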