Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers] Date: 2024-01-01 Volume 62: 1-14
Identifiers
DOI:10.1109/tgrs.2024.3362356
Abstract
Because of its ability to capture long-range dependencies, the self-attention mechanism underlying transformer models has been introduced into hyperspectral image classification. However, self-attention offers only spatial adaptability and ignores channel adaptability, so it cannot fully extract the complex spectral-spatial information in hyperspectral images. To tackle this problem, in this paper we propose a novel spectral-spatial large kernel attention network (SSLKA) for hyperspectral image classification. SSLKA consists of two consecutive cooperative spectral-spatial attention blocks with large convolution kernels, which efficiently extract features in the spectral and spatial domains simultaneously. In each cooperative spectral-spatial attention block, a spectral attention branch and a spatial attention branch generate attention maps, and the extracted spatial features are then fused with the spectral features. With large kernel attention, we enhance classification performance by fully exploiting local contextual information, capturing long-range dependencies, and adapting along the channel dimension. Experimental results on widely used benchmark datasets show that our method achieves higher classification accuracy, in terms of overall accuracy, average accuracy, and the Kappa coefficient, than several state-of-the-art methods.
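The abstract does not give implementation details, but the large-kernel-attention idea it builds on is commonly realized by decomposing a large convolution into a small depthwise convolution, a dilated depthwise convolution, and a pointwise (1×1) convolution, whose output modulates the input feature map elementwise; this yields both a large receptive field and per-channel adaptivity. Below is a minimal NumPy sketch of that decomposition under stated assumptions: the kernel sizes (5×5 depthwise, 7×7 dilated depthwise with dilation 3), the function names, and the single-block layout are all illustrative, not the authors' actual architecture.

```python
import numpy as np

def depthwise_conv2d(x, kernels, dilation=1):
    """Per-channel 2-D convolution with 'same' zero padding.

    x: (C, H, W) feature map; kernels: (C, k, k), one filter per channel.
    """
    C, H, W = x.shape
    k = kernels.shape[1]
    pad = dilation * (k // 2)
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(k):
            for j in range(k):
                # Shift-and-accumulate form of convolution per channel.
                out[c] += kernels[c, i, j] * xp[
                    c,
                    i * dilation : i * dilation + H,
                    j * dilation : j * dilation + W,
                ]
    return out

def pointwise_conv(x, w):
    """1x1 convolution mixing channels: w is (C_out, C_in)."""
    return np.einsum("oc,chw->ohw", w, x)

def large_kernel_attention(x, dw_k, dwd_k, pw_w, dilation=3):
    """Illustrative LKA block: attention map modulates the input elementwise.

    The large effective kernel is decomposed as
    depthwise conv -> dilated depthwise conv -> pointwise conv.
    """
    a = depthwise_conv2d(x, dw_k)                      # local context
    a = depthwise_conv2d(a, dwd_k, dilation=dilation)  # long-range context
    a = pointwise_conv(a, pw_w)                        # channel adaptivity
    return a * x                                       # spatial+channel gating
```

With identity kernels (a single 1 at each filter center and an identity channel-mixing matrix), the attention map equals the input, so the block reduces to an elementwise square of the feature map, which is a convenient sanity check for the padding and dilation arithmetic.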