Keywords: Computer science, Hyperspectral imaging, Artificial intelligence, Pattern recognition, Transformer, Convolutional neural network, Feature extraction, Spatial analysis, Remote sensing
Authors
Zhenqiu Shu, Yuyang Wang, Zhengtao Yu
Identifier
DOI:10.1016/j.engappai.2023.107351
Abstract
Hyperspectral image classification (HSIC) has been a significant topic in remote sensing in recent years. Convolutional neural networks (CNNs) have shown promising performance in HSIC applications thanks to their strong local feature extraction ability, but they struggle to extract global information from HSIs, which limits classification performance. Recently, vision transformers have been applied to HSIC; their advantage lies in the multi-head self-attention (MHSA) mechanism, which captures global dependencies. Nevertheless, the features extracted by MHSA tend to be over-dispersed because of the abundant band information hidden in HSIs. In this work, we propose a novel method, called the dual attention transformer network (DATN), for HSIC. It consists of two types of modules: the spatial–spectral hybrid transformer (SSHT) module and the spectral local-conv block (SLCB) module. The SSHT module uses MHSA to capture spatial and spectral feature information, so it can exploit global spatial–spectral features while simultaneously embedding local spatial information. In addition, the SLCB module is designed to extract the local spectral information of HSIs effectively. The SSHT and SLCB modules are then integrated into an end-to-end framework. Finally, the global and local spatial–spectral features extracted by this framework are fed into a fully connected layer to obtain the classification results. Experiments on three HSI datasets demonstrate that DATN outperforms several state-of-the-art HSIC approaches.
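To make the two ingredients of the abstract concrete, the sketch below shows (a) a generic multi-head self-attention step over the flattened spatial positions of an HSI patch, the mechanism transformers use to model global dependencies, and (b) a plain 1-D convolution along the spectral axis, the kind of local spectral operation an "SLCB"-style block builds on. This is a minimal NumPy illustration of the general techniques, not the authors' DATN implementation; all function names, shapes, and the random weight initialization are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(tokens, num_heads, rng):
    """Generic MHSA over a token sequence, e.g. the flattened spatial
    positions of an HSI patch, each token being a d-dim spectral feature.
    Weights are random here purely for illustration (hypothetical, untrained).
    """
    n, d = tokens.shape
    assert d % num_heads == 0
    dh = d // num_heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    # split the feature dim into heads: (num_heads, n, dh)
    split = lambda m: m.reshape(n, num_heads, dh).transpose(1, 0, 2)
    q, k, v = map(split, (q, k, v))
    # scaled dot-product attention: every position attends to every other,
    # which is what gives the transformer its global receptive field
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh), axis=-1)  # (h, n, n)
    out = (attn @ v).transpose(1, 0, 2).reshape(n, d)  # merge heads back
    return out @ Wo, attn

def spectral_conv1d(spectrum, kernel):
    """Local spectral feature extraction: 1-D 'valid' convolution along the
    band axis, so each output only depends on a small window of bands."""
    k = len(kernel)
    return np.array([spectrum[i:i + k] @ kernel
                     for i in range(len(spectrum) - k + 1)])

# Example: a 7x7 spatial patch with 8 bands -> 49 tokens of dimension 8
rng = np.random.default_rng(0)
tokens = rng.standard_normal((49, 8))
out, attn = multi_head_self_attention(tokens, num_heads=2, rng=rng)
bands = spectral_conv1d(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.0, 1.0]))
```

The contrast the abstract draws is visible in the two functions: each attention weight row spans all 49 positions (global), while each convolution output touches only adjacent bands (local); DATN's stated contribution is integrating both kinds of feature in one end-to-end network.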