Hyperspectral imaging
Remote sensing
Feature extraction
Contextual image classification
Computer science
Artificial intelligence
Convolutional neural network
Spatial analysis
Pattern recognition (psychology)
Image (mathematics)
Geology
Authors
Yishu Peng, Kun Zhang, Bing Tu, Qianming Li, Wujing Li
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing (Institute of Electrical and Electronics Engineers)
Date: 2022-01-01
Volume/Issue: 60: 1-15
Cited by: 31
Identifier
DOI:10.1109/tgrs.2022.3203476
Abstract
Convolutional neural networks (CNNs) have been widely used in hyperspectral image (HSI) classification tasks because of their excellent local spatial feature extraction capabilities. However, because CNNs struggle to establish dependencies across long data sequences, they are limited when processing hyperspectral spectral sequence features. To overcome these limitations, and inspired by the Transformer model, a spatial–spectral transformer with cross-attention (CASST) method is proposed. Overall, the method consists of a dual-branch structure with spatial and spectral sequence branches: the former captures fine-grained spatial information of the HSI, while the latter extracts spectral features and establishes interdependencies between spectral sequences. Specifically, to enhance the consistency among features and relieve the computational burden, we design a spatial–spectral cross-attention module with weighted sharing to extract interactive spatial–spectral fusion features within each Transformer block, and develop a spatial–spectral weighted sharing mechanism to capture robust semantic features across Transformer blocks. Performance evaluation experiments on three hyperspectral classification datasets demonstrate that the CASST method achieves better accuracy than state-of-the-art Transformer classification models and mainstream classification networks.
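To illustrate the cross-attention idea described in the abstract, the sketch below shows how a spatial token stream and a spectral token stream can attend to each other while reusing a single shared attention module, loosely mirroring the "weighted sharing" notion. This is a minimal sketch only: it assumes PyTorch, and all module names, dimensions, and design details (SpatialSpectralCrossAttention, embed_dim, num_heads) are illustrative assumptions rather than the authors' implementation.

# Minimal cross-attention sketch between spatial and spectral token streams.
# Assumes PyTorch; not the authors' CASST code.
import torch
import torch.nn as nn


class SpatialSpectralCrossAttention(nn.Module):
    """Each branch queries the other branch's tokens (cross-attention)."""

    def __init__(self, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        # One shared attention module stands in for the weighted-sharing idea:
        # both attention directions reuse the same projection weights.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm_spa = nn.LayerNorm(embed_dim)
        self.norm_spe = nn.LayerNorm(embed_dim)

    def forward(self, spatial_tokens: torch.Tensor, spectral_tokens: torch.Tensor):
        # spatial_tokens:  (batch, n_spatial, embed_dim), e.g. patch tokens
        # spectral_tokens: (batch, n_bands,   embed_dim), e.g. band tokens
        # Spatial branch attends to spectral tokens ...
        spa_out, _ = self.attn(query=self.norm_spa(spatial_tokens),
                               key=spectral_tokens, value=spectral_tokens)
        # ... and the spectral branch attends to spatial tokens,
        # reusing the same attention weights.
        spe_out, _ = self.attn(query=self.norm_spe(spectral_tokens),
                               key=spatial_tokens, value=spatial_tokens)
        # Residual connections preserve each branch's original information.
        return spatial_tokens + spa_out, spectral_tokens + spe_out


if __name__ == "__main__":
    block = SpatialSpectralCrossAttention(embed_dim=64, num_heads=4)
    spatial = torch.randn(2, 49, 64)   # e.g. 7x7 spatial patch tokens
    spectral = torch.randn(2, 30, 64)  # e.g. 30 spectral band tokens
    fused_spa, fused_spe = block(spatial, spectral)
    print(fused_spa.shape, fused_spe.shape)  # (2, 49, 64) (2, 30, 64)

In this sketch the residual outputs would feed the subsequent feed-forward layers of each branch; how the paper actually fuses the two branches for classification is not specified in the abstract.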