Keywords: artificial intelligence; computer science; synthetic aperture radar; pattern recognition; template matching; similarity measure; feature fusion; computer vision; orientation; convolutional neural network; image registration; optical imagery
Authors
Yang Liu,Hua Qi,Shiyong Peng
Source
Journal: IEEE Geoscience and Remote Sensing Letters [Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume: 20, Pages: 1-5
Cited by: 2
Identifier
DOI:10.1109/lgrs.2023.3298687
Abstract
High-precision image matching techniques are required to fully utilize the complementary information in optical and synthetic aperture radar (SAR) images. However, there are notable nonlinear radiometric differences (NRD) between optical and SAR images because of the different imaging mechanisms of the two sensor types. Existing template matching methods based on the Siamese structure underutilize phase structure information, which is less susceptible to NRD, resulting in subpar matching precision. To address this problem, this letter proposes an optical and SAR image matching method based on phase structure convolutional features, which uses the log-Gabor filter to extract multi-orientation phase structure information from the image. It constructs a multi-scale fusion SiamUNet-7 (MSF SiamUNet-7) network to extract phase structure convolutional features, fully fusing local texture information at large scale with global structure information at small scale. The phase structure convolutional features of the optical and SAR images are passed through a mutual correlation layer to generate a similarity map for the image pair, and the peak position in the similarity map is taken as the best matching result. Experiments showed that, on the cropped Tiny-SEN1-2 dataset, the correct matching rate (CMR) and mean matching error (mME) of the proposed method at threshold T ≤ 4 were 92.24% and 1.348, respectively, improving the CMR (T ≤ 4) by 4.51% and reducing the mME by 0.046 compared with the original SiamUNet-9 model. The proposed method can effectively overcome the large NRD between optical and SAR images and achieve high-precision matching.
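The core matching step the abstract describes — correlating template features against search-region features and taking the peak of the similarity map — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names are hypothetical, the features are plain arrays rather than MSF SiamUNet-7 outputs, and the correlation is a simple normalized sliding dot product standing in for the paper's mutual correlation layer.

```python
import numpy as np

def correlation_similarity_map(search_feat, template_feat):
    """Slide the template features over the search features and return a
    similarity map of normalized correlation scores.
    search_feat: (C, H, W) array; template_feat: (C, h, w) array."""
    C, H, W = search_feat.shape
    _, h, w = template_feat.shape
    sim = np.zeros((H - h + 1, W - w + 1))
    # L2-normalize the template once (epsilon guards against zero norm)
    t = template_feat / (np.linalg.norm(template_feat) + 1e-8)
    for i in range(sim.shape[0]):
        for j in range(sim.shape[1]):
            patch = search_feat[:, i:i + h, j:j + w]
            patch = patch / (np.linalg.norm(patch) + 1e-8)
            sim[i, j] = np.sum(patch * t)  # cosine similarity of the two feature patches
    return sim

def best_match(sim_map):
    """Peak position in the similarity map, taken as the matching result."""
    return np.unravel_index(np.argmax(sim_map), sim_map.shape)
```

In practice the correlation would run on learned multi-channel features from the two network branches (often via FFT or a framework's conv op for speed); the explicit loop here just makes the sliding-window structure of the similarity map explicit.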