Topics
Multispectral image, Hyperspectral imaging, Computer science, Artificial intelligence, Image fusion, Remote sensing, Fusion, Multispectral pattern recognition, Sensor fusion, Pattern recognition (psychology), Computer vision, Image (mathematics), Geology, Linguistics, Philosophy
Authors
Xuheng Cao, Yusheng Lian, Kaixuan Wang, Chao Ma, Xianqing Xu
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Issue: 62: 1-15
Cited by: 7
Identifier
DOI: 10.1109/tgrs.2024.3359232
Abstract
Fusing a low spatial resolution hyperspectral image with a high spatial resolution multispectral image has become a popular way to generate a high spatial resolution hyperspectral image (HR-HSI). Most methods assume that the degradation information from high resolution to low resolution is known in both the spatial and spectral domains. In practice, however, this information is often limited or unavailable, restricting their performance. Furthermore, existing fusion methods insufficiently explore the cross-interaction between the spatial and spectral domains of the HR-HSI, leaving room for improvement. This paper proposes an unsupervised Hybrid Network of Transformer and CNN (uHNTC) for blind HSI-MSI fusion. The uHNTC comprises three subnetworks: a transformer-based feature fusion subnetwork (FeafusFormer) and two CNN-based degradation subnetworks (SpaDNet and SpeDNet). Considering the strong multi-level spatio-spectral correlation between the desired HR-HSI and the observed images, we design a Multi-level Cross-feature Attention (MCA) mechanism in FeafusFormer. By incorporating hierarchical spatio-spectral feature fusion into the transformer's attention mechanism, the MCA globally maintains a high spatio-spectral cross-similarity between the recovered HR-HSI and the observed images, thereby ensuring strong cross-interaction in the recovered HR-HSI. Subsequently, the characteristics of the degradation information guide the design of SpaDNet and SpeDNet, which helps FeafusFormer accurately recover the desired HR-HSI in complex real-world environments. Through unsupervised joint training of the three subnetworks, uHNTC recovers the desired HR-HSI without prior knowledge of the degradation information. Experimental results on three public datasets and WorldView-2 imagery show that uHNTC outperforms ten state-of-the-art fusion methods. Code is available at https://github.com/Caoxuheng/HIFtool.
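To illustrate the unsupervised blind-fusion idea described in the abstract, the sketch below shows, under stated assumptions, how a fusion network can be trained jointly with two learnable degradation modules: a spatial one (blur plus downsampling, in the role of SpaDNet) and a spectral one (a 1x1 convolution acting as a spectral response, in the role of SpeDNet). The loss enforces consistency between the re-degraded estimate and the observed LR-HSI and HR-MSI, so no ground-truth HR-HSI or known degradation operators are needed. This is a minimal hypothetical sketch, not the authors' implementation: the toy CNN `SimpleFusion` stands in for the transformer-based FeafusFormer, and band counts, scale factor, kernel sizes, and loss weights are assumed.

```python
# Minimal sketch of unsupervised blind HSI-MSI fusion (assumed setup, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

HS_BANDS, MS_BANDS, SCALE = 31, 3, 4  # assumed band counts and spatial ratio

class SpatialDegradation(nn.Module):
    """Learnable blur + downsampling, standing in for SpaDNet."""
    def __init__(self, scale=SCALE):
        super().__init__()
        self.blur = nn.Conv2d(HS_BANDS, HS_BANDS, 7, padding=3, groups=HS_BANDS, bias=False)
        self.scale = scale
    def forward(self, hr_hsi):
        return F.avg_pool2d(self.blur(hr_hsi), self.scale)

class SpectralDegradation(nn.Module):
    """Learnable spectral response (1x1 conv), standing in for SpeDNet."""
    def __init__(self):
        super().__init__()
        self.srf = nn.Conv2d(HS_BANDS, MS_BANDS, 1, bias=False)
    def forward(self, hr_hsi):
        return self.srf(hr_hsi)

class SimpleFusion(nn.Module):
    """Toy CNN fusion network; the paper uses a transformer (FeafusFormer) instead."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(HS_BANDS + MS_BANDS, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, HS_BANDS, 3, padding=1))
    def forward(self, lr_hsi, hr_msi):
        up = F.interpolate(lr_hsi, size=hr_msi.shape[-2:], mode='bilinear', align_corners=False)
        return self.body(torch.cat([up, hr_msi], dim=1))

# Unsupervised joint training on a single observed pair (LR-HSI, HR-MSI).
lr_hsi = torch.rand(1, HS_BANDS, 32, 32)      # observed low-resolution hyperspectral image
hr_msi = torch.rand(1, MS_BANDS, 128, 128)    # observed high-resolution multispectral image
fusion, spa_d, spe_d = SimpleFusion(), SpatialDegradation(), SpectralDegradation()
opt = torch.optim.Adam(
    [*fusion.parameters(), *spa_d.parameters(), *spe_d.parameters()], lr=1e-3)

for step in range(200):
    hr_hsi_hat = fusion(lr_hsi, hr_msi)               # estimated HR-HSI
    loss = F.l1_loss(spa_d(hr_hsi_hat), lr_hsi) \
         + F.l1_loss(spe_d(hr_hsi_hat), hr_msi)       # consistency with both observations
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the degradation modules are learned jointly with the fusion network, the scheme stays applicable when the blur kernel and spectral response are unknown, which is the "blind" setting the abstract targets; the paper's actual architecture and losses differ in detail.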