Light Self-Gaussian-Attention Vision Transformer for Hyperspectral Image Classification

Authors
Chao Ma, Minjie Wan, Jian Wu, Xiaofang Kong, Ajun Shao, Fan Wang, Qian Chen, Guohua Gu
Source
Journal: IEEE Transactions on Instrumentation and Measurement (Institute of Electrical and Electronics Engineers)
Volume/issue: 72: 1-12; cited 49 times
Identifier
DOI: 10.1109/TIM.2023.3279922
Abstract
In recent years, convolutional neural networks (CNNs) have been widely used in hyperspectral image (HSI) classification due to their exceptional performance in local feature extraction. However, because of the local connectivity and weight-sharing properties of the convolution kernel, CNNs are limited in long-distance modeling, and deeper networks tend to increase computational costs. To address these issues, this paper proposes a vision Transformer (VIT) based on the light self-Gaussian-attention (LSGA) mechanism, which extracts global deep semantic features. First, the hybrid spatial-spectral Tokenizer module extracts shallow spatial-spectral features and expands image patches to generate Tokens. Next, the light self-attention uses Q (Query), X (original input), and X in place of Q, K (Key), and V (Value) to reduce computation and parameter count. Furthermore, to avoid the aliasing of central and neighborhood features caused by missing location information, we devise a Gaussian absolute position bias that simulates the HSI data distribution and makes the attention weights concentrate closer to the central query block. Several experiments verify the effectiveness of the proposed method, which outperforms state-of-the-art methods on four datasets. Specifically, we observed a 0.62% accuracy improvement over A2S2K and a 0.11% improvement over SSFTT. In conclusion, the proposed LSGA-VIT method demonstrates promising results in HSI classification and shows potential in addressing the issues of location-aware long-distance modeling and computational cost. Our codes are available at https://github.com/machao132/LSGA-VIT.
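The two ideas in the abstract — reusing the input X as both keys and values, and adding a Gaussian bias centered on the middle patch — can be illustrated with a minimal single-head numpy sketch. This is an assumption-laden illustration, not the authors' implementation: the function name `lsga_attention`, the `sigma` parameter, and the exact form of the bias (negative squared distance from the grid center) are all hypothetical choices made here for clarity; see the linked repository for the real code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lsga_attention(X, Wq, sigma=1.0):
    """Sketch of light self-Gaussian-attention (hypothetical form).

    X  : (N, d) tokens laid out on a sqrt(N) x sqrt(N) patch grid.
    Wq : (d, d) query projection. Keys and values reuse X directly,
         which is what makes the attention "light" (no Wk, Wv).
    """
    N, d = X.shape
    Q = X @ Wq
    scores = Q @ X.T / np.sqrt(d)            # (N, N) attention logits

    # Gaussian absolute position bias: tokens closer to the central
    # patch of the grid get a larger (less negative) bias.
    side = int(np.sqrt(N))
    ys, xs = np.divmod(np.arange(N), side)   # grid coordinates of each token
    cy = cx = (side - 1) / 2                 # center of the patch grid
    dist2 = (ys - cy) ** 2 + (xs - cx) ** 2  # squared distance to center
    bias = -dist2 / (2 * sigma ** 2)
    scores = scores + bias[None, :]          # broadcast over all queries

    return softmax(scores, axis=-1) @ X      # values are X itself
```

For a 3x3 patch grid with 4-dimensional tokens, `lsga_attention(X, Wq)` with `X` of shape `(9, 4)` returns an output of the same shape; the bias term simply shifts every query's attention distribution toward the central patch before normalization.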