Hyperspectral imaging
Computer science
Image resolution
Artificial intelligence
Pattern recognition (psychology)
Full-spectrum imaging
Redundancy (engineering)
Remote sensing
Spatial analysis
Computer vision
Convolutional neural network
Spectral band
Convolution (computer science)
Artificial neural network
Geography
Operating system
Authors
Shi Chen, Lefei Zhang, Liangpei Zhang
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 61: 1-14
Cited by: 28
Identifier
DOI: 10.1109/tgrs.2023.3315970
Abstract
Deep learning-based hyperspectral image super-resolution (SR) methods have achieved remarkable success, which can improve the spatial resolution of hyperspectral images with abundant spectral information. However, most of them utilize 2D or 3D convolutions to extract local features while ignoring the rich global spatial-spectral information. In this paper, we propose a novel method called the Multi-Scale Deformable Transformer (MSDformer) for single hyperspectral image SR. The proposed method incorporates the strengths of the convolutional neural network for local spatial-spectral information and the Transformer structure for global spatial-spectral information. Specifically, a multi-scale spectral attention module based on dilated convolution is designed to extract local multi-scale spatial-spectral information, which leverages shared module parameters to exploit the intrinsic spatial redundancy and spectral attention mechanism to accentuate the subtle differences between different spectral groups. Then a deformable convolution-based Transformer module is proposed to further extract the global spatial-spectral information from the local multi-scale features of the previous stage, which can explore the diverse long-range dependencies among all spectral bands. Extensive experiments on three hyperspectral datasets demonstrate that the proposed method achieves excellent SR performance and outperforms the state-of-the-art methods in terms of quantitative quality and visual results. The code is available at https://github.com/Tomchenshi/MSDformer.git.
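The abstract describes a multi-scale spectral attention module built from dilated convolutions and a spectral (channel-wise) attention mechanism. Below is a minimal PyTorch-style sketch of that general idea, assuming a squeeze-and-excitation style gate and a residual connection; the class name, dilation rates, and layer sizes are illustrative assumptions and do not reproduce the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class MultiScaleSpectralAttention(nn.Module):
    """Illustrative sketch: parallel dilated convolutions capture local context
    at several scales, and a channel (spectral) attention gate reweights bands.
    Names and hyperparameters here are assumptions, not the paper's code."""

    def __init__(self, channels: int, dilations=(1, 2, 3), reduction: int = 4):
        super().__init__()
        # Parallel 3x3 convolutions with different dilation rates; padding=d
        # keeps the spatial size unchanged for each branch.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated multi-scale branches.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        # Squeeze-and-excitation style gate over the spectral (channel) axis.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi_scale)
        # Residual connection: input plus attention-weighted fused features.
        return x + fused * self.attention(fused)


if __name__ == "__main__":
    # A hypothetical 31-band hyperspectral patch, bands treated as channels.
    bands = torch.randn(1, 31, 32, 32)
    print(MultiScaleSpectralAttention(31)(bands).shape)  # torch.Size([1, 31, 32, 32])
```

In the paper's design the module parameters are shared across spectral groups to exploit spatial redundancy, and its output feeds a deformable convolution-based Transformer stage that models long-range dependencies across all bands; that second stage is not sketched here.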