Keywords: hyperspectral imaging, super-resolution, prior probability, regularization, convolutional neural network, Transformer, deep learning
Authors
Qing Ma, Junjun Jiang, Xianming Liu, Jiayi Ma
Identifier
DOI: 10.1016/j.inffus.2023.101907
Abstract
To address the ill-posed problem of hyperspectral image super-resolution (HSISR), a commonly employed technique is to design a regularization term based on the prior information of hyperspectral images (HSIs) to effectively constrain the objective function. Traditional model-based methods that rely on manually crafted priors are insufficient to fully characterize the properties of HSIs. Learning-based methods usually use a convolutional neural network (CNN) to learn the implicit priors of HSIs. However, the learning ability of CNNs is limited: they consider only the spatial characteristics of HSIs while ignoring the spectral characteristics, and convolution is ineffective for long-range dependency modeling, so there is still considerable room for improvement. In this paper, we propose a novel HSISR method that leverages the Transformer architecture instead of a CNN to learn the prior of HSIs. Specifically, we employ the proximal gradient algorithm to solve the HSISR model and simulate the iterative solution process with an unfolding network. The self-attention layers of the Transformer enable global spatial interaction, while a 3D-CNN placed behind the Transformer layers better captures the spatio-spectral correlation of HSIs. Both quantitative and visual results on three widely used HSI datasets and a real-world dataset demonstrate that the proposed method achieves a considerable gain over mainstream algorithms, including the most competitive conventional methods and recently proposed deep learning-based methods. The source code and trained models are publicly available at https://github.com/qingma2016/3DT-Net.
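The unfolding scheme the abstract describes — alternating a gradient step on the data-fidelity term with a proximal step implemented by a learned prior module — can be sketched numerically. This is a minimal illustration, not the paper's implementation: the degradation operator is assumed to be simple mean-pooling, and the learned Transformer/3D-CNN prior is replaced by a hypothetical soft-threshold placeholder (`prox_placeholder`).

```python
import numpy as np

def downsample(x, s):
    # Assumed degradation A: s-fold spatial mean-pooling, applied per band.
    B, H, W = x.shape
    return x.reshape(B, H // s, s, W // s, s).mean(axis=(2, 4))

def adjoint_downsample(r, s):
    # A^T for mean-pooling: spread each low-res value over its s x s block, / s^2.
    return r.repeat(s, axis=1).repeat(s, axis=2) / (s * s)

def prox_placeholder(x, tau):
    # Stand-in for the learned Transformer + 3D-CNN prior module:
    # soft-threshold each band toward its mean (illustrative only).
    m = x.mean(axis=(1, 2), keepdims=True)
    return m + np.sign(x - m) * np.maximum(np.abs(x - m) - tau, 0.0)

def unfolded_hsisr(y, s, stages=5, alpha=None, tau=0.01):
    # Unrolled proximal gradient for min_x 0.5*||A x - y||^2 + prior(x):
    # each stage does a gradient step on the data term, then the prox step.
    if alpha is None:
        alpha = float(s * s)  # largest eigenvalue of A^T A is 1/s^2
    x = y.repeat(s, axis=1).repeat(s, axis=2)  # nearest-neighbor init
    for _ in range(stages):
        r = downsample(x, s) - y                  # residual A x - y
        x = x - alpha * adjoint_downsample(r, s)  # gradient step
        x = prox_placeholder(x, tau)              # prior (learned in the paper)
    return x
```

In the paper, the proximal operator is not a fixed formula but a network stage (Transformer layers for global spatial interaction, followed by a 3D-CNN for spatio-spectral correlation) whose weights are trained end-to-end across the unrolled stages.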