Computer science
Convolutional neural network
Artificial intelligence
Pattern recognition (psychology)
Image (mathematics)
Deep learning
Computer vision
Authors
Zixu Li, Genji Yuan, Jinjiang Li
Identifiers
DOI:10.1016/j.eswa.2024.123589
Abstract
The goal of pansharpening methods is to combine the complementary spectral and spatial information contained in multispectral (MS) and panchromatic (PAN) images to obtain the desired high-resolution multispectral (HRMS) image. The majority of existing pansharpening methods either extract features from the MS and PAN images separately or extract features after concatenating the two images. However, both strategies underuse the complementary information between the two modalities and tend to produce redundant features, so important information is lost during extraction and overall performance suffers. To better exploit the complementary information between the MS and PAN images and to enhance the interpretability of the network, we propose the Deep Unfolding Convolutional-Dictionary Network (DUCD) for pansharpening. The network fully integrates the complementary information of the MS and PAN images to generate the final fused image. The overall architecture consists of two parts: the encoder and the decoder. In the encoder, we construct an observation model that separates the feature information common to the MS and PAN images from the information unique to each. We then use an approximate gradient algorithm to iteratively optimize this model and unfold the iterations into a deep network structure. In the decoder, we concatenate the common and modality-specific information obtained from the MS and PAN images, pass it through convolutional and activation layers, and then feed it into the proposed Frequency Domain-based Transformer (FDT) module and an information-lossless invertible neural network (INN). This provides a more efficient way to establish long-range dependencies between feature extraction and feature fusion. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on three benchmark datasets: QB, GF2, and WV3. The results show that our method outperforms current state-of-the-art (SOTA) pansharpening methods.
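For readers unfamiliar with deep unfolding, the sketch below illustrates the general idea behind the encoder described in the abstract: one iteration of a gradient-based scheme for a convolutional observation model, unfolded into a learnable network layer. This is a minimal illustration under assumptions, not the authors' implementation; the module names (ProxNet, UnfoldingStage), channel sizes, dictionary parameterization, and the toy usage are all hypothetical, and the paper's FDT and INN decoder modules are not reproduced here.

```python
# Minimal sketch (NOT the DUCD code) of one unfolded optimization stage for a
# convolutional observation model: a gradient step on a data-fidelity term
# followed by a learned proximal-style refinement. Names and sizes are assumed.
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Learned refinement step: a small residual CNN applied to a code estimate."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class UnfoldingStage(nn.Module):
    """One unfolded iteration for an assumed observation model:
    image y ~= D_c(c) + D_u(u), where c is a common code shared across
    modalities and u is a modality-specific code, with the dictionaries
    D_c, D_u implemented as convolutional layers.
    """
    def __init__(self, in_ch, feat_ch):
        super().__init__()
        self.D_c = nn.Conv2d(feat_ch, in_ch, 3, padding=1)    # common dictionary
        self.D_u = nn.Conv2d(feat_ch, in_ch, 3, padding=1)    # unique dictionary
        self.D_c_t = nn.Conv2d(in_ch, feat_ch, 3, padding=1)  # adjoint-like map
        self.D_u_t = nn.Conv2d(in_ch, feat_ch, 3, padding=1)
        self.step = nn.Parameter(torch.tensor(0.1))           # learned step size
        self.prox_c = ProxNet(feat_ch)
        self.prox_u = ProxNet(feat_ch)

    def forward(self, y, c, u):
        # residual of the observation model
        r = self.D_c(c) + self.D_u(u) - y
        # gradient step on the data-fidelity term, then learned refinement
        c = self.prox_c(c - self.step * self.D_c_t(r))
        u = self.prox_u(u - self.step * self.D_u_t(r))
        return c, u

# Hypothetical usage for the MS branch with toy shapes:
stage = UnfoldingStage(in_ch=4, feat_ch=32)
ms = torch.randn(1, 4, 64, 64)       # upsampled MS image
c = torch.zeros(1, 32, 64, 64)       # common code initialization
u = torch.zeros(1, 32, 64, 64)       # MS-specific code initialization
for _ in range(3):                   # a few unfolded iterations
    c, u = stage(ms, c, u)
```

In the full DUCD design described in the abstract, a parallel branch would process the PAN image with the common code shared between the two branches, and the concatenated common and specific codes would then pass through the FDT and INN modules in the decoder; the loop above only indicates how unfolded iterations take the place of a hand-tuned optimizer.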