Keywords: Hyperspectral imaging, Multispectral image, Computer science, Artificial intelligence, Remote sensing, Dual (grammatical number), Image fusion, Computer vision, Graph, Dual graph, Pattern recognition (psychology), Image (mathematics), Geology, Art, Literature, Theoretical computer science, Line chart
Authors
Kai Zhang, Jun Yan, Feng Zhang, Chiru Ge, Wenbo Wan, Jiande Sun, Huaxiang Zhang
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 62: 1-18
Cited by: 3
Identifier
DOI: 10.1109/TGRS.2024.3365719
Abstract
Recently, deep neural network (DNN)-based methods have achieved good results in the fusion of low spatial resolution hyperspectral (LR HS) and high spatial resolution multispectral (HR MS) images. However, they do not sufficiently exploit the spectral band correlation (SBC) and the spatial nonlocal similarity (SNS) in hyperspectral (HS) images. To model these two priors efficiently, we propose a spectral-spatial dual graph unfolding network (SDGU-Net), which is derived from the optimization of graph regularized restoration models. Specifically, we introduce spectral and spatial graphs to regularize the reconstruction of the desired high spatial resolution hyperspectral (HR HS) image. To explore the SBC and SNS priors of HS images in feature space while exploiting the powerful learning ability of DNNs, the iterative optimization of the spectral and spatial graph regularized models is unfolded into a network composed of spectral and spatial graph unfolding modules. The two kinds of modules are designed according to the solutions of the spectral and spatial graph regularized models. In these modules, we employ graph convolutional networks (GCNs) to capture the SBC and SNS in the fused image. The learned features are then integrated by the corresponding feature fusion modules and fed into the feature condense module to generate the HR HS image. Extensive experiments on three benchmark datasets demonstrate the effectiveness of the proposed SDGU-Net.
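For context, a graph regularized restoration model of the kind the abstract refers to typically takes the following general form. This is a generic sketch for illustration only; the symbols, operators, and regularization weights below are assumptions, not the paper's exact formulation:

\min_{X}\; \|Y_h - XBS\|_F^2 + \|Y_m - RX\|_F^2 + \lambda_1\,\mathrm{tr}\!\left(X^{\top} L_{\mathrm{spec}} X\right) + \lambda_2\,\mathrm{tr}\!\left(X L_{\mathrm{spat}} X^{\top}\right)

Here $X \in \mathbb{R}^{C \times N}$ is the target HR HS image arranged as $C$ bands by $N$ pixels, $Y_h$ the LR HS observation, $Y_m$ the HR MS observation, $B$ and $S$ spatial blurring and downsampling operators, $R$ the spectral response of the MS sensor, $L_{\mathrm{spec}}$ a band-level graph Laplacian encoding SBC, and $L_{\mathrm{spat}}$ a pixel-level graph Laplacian encoding SNS. Under this reading, each iteration of a gradient or proximal solver for the spectral and spatial regularized subproblems corresponds to one stage of the unfolded network, with the Laplacian terms realized in feature space by the spectral and spatial GCN modules described in the abstract.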