Spiking ViT: spiking neural networks with transformer-attention for steel surface defect classification

Keywords: spiking neural network, artificial neural network, computer science, artificial intelligence, pattern recognition (psychology), classification, encoder, transformer, voltage, engineering, electrical engineering, operating system
Authors
Liang Gong, Hang Dong, Xinyu Zhang, Xin Cheng, Fan Ye, Liangchao Guo, Zhenghui Ge
Source
Journal: Journal of Electronic Imaging [SPIE - International Society for Optical Engineering]
Volume (Issue): 33 (03)    Citations: 5
Identifier
DOI: 10.1117/1.jei.33.3.033001
Abstract

Throughout the steel production process, a variety of surface defects inevitably occur. These defects can impair the quality of steel products and reduce manufacturing efficiency. Therefore, it is crucial to study and categorize the multiple defects on the surface of steel strips. The vision transformer (ViT) is a neural network model based on a self-attention mechanism that is widely used in many different disciplines. Conventional ViT ignores the specifics of brain signaling and instead uses activation functions to simulate genuine neurons. One of the fundamental building blocks of a spiking neural network is the leaky integrate-and-fire (LIF) neuron, which has biodynamic characteristics akin to those of a genuine neuron. LIF neurons work in an event-driven manner, so higher performance can be achieved with less power. The goal of this work is to integrate ViT and LIF neurons to build and train an end-to-end hybrid network architecture, the spiking vision transformer (S-ViT), for the classification of steel surface defects. The framework builds on the ViT architecture by replacing the activation functions used in ViT with LIF neurons, constructing a spiking transformer encoder as a global spike-feature fusion module together with a spiking-MLP classification head that implements the classification functionality, and using these as the basic building blocks of S-ViT. Based on the experimental results, our method demonstrates outstanding classification performance across all metrics. The overall test accuracies of S-ViT are 99.41%, 99.65%, 99.54%, and 99.77% on NEU-CLS, and 95.70%, 95.93%, 96.94%, and 97.19% on XSDD. S-ViT achieves superior classification performance compared to convolutional neural networks and recent findings, and its performance is also improved relative to the original ViT model. Furthermore, the robustness test results show that S-ViT maintains reliable accuracy when recognizing images that contain Gaussian noise.
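
As a rough illustration of the idea described in the abstract, the sketch below shows a PyTorch-style leaky integrate-and-fire (LIF) activation and a transformer-style encoder block in which the usual activation functions are replaced by LIF neurons. This is a minimal sketch under assumed conventions; the class names, hyperparameters, reset rule, and block layout are illustrative and do not reproduce the authors' S-ViT implementation.

```python
# Minimal sketch of a LIF activation inside a transformer encoder block.
# All names and hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire activation: accumulates a membrane potential
    over time steps and emits a binary spike when it crosses a threshold.
    Note: the hard threshold is non-differentiable; real training would use a
    surrogate gradient, which is omitted here for brevity."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau = tau                  # membrane leak time constant
        self.v_threshold = v_threshold  # firing threshold

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: [T, B, N, D] — input current over T time steps
        v = torch.zeros_like(x_seq[0])  # membrane potential
        spikes = []
        for x_t in x_seq:
            v = v + (x_t - v) / self.tau                 # leaky integration
            spike = (v >= self.v_threshold).float()      # fire on threshold crossing
            v = v * (1.0 - spike)                        # hard reset after a spike
            spikes.append(spike)
        return torch.stack(spikes)                       # binary spike train


class SpikingEncoderBlock(nn.Module):
    """Transformer-style encoder block with LIF neurons in place of the usual
    GELU/ReLU activations (a sketch of the general idea, not the paper's exact
    spiking transformer encoder)."""

    def __init__(self, dim: int = 384, heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.fc1 = nn.Linear(dim, dim * mlp_ratio)
        self.lif1 = LIFNeuron()
        self.fc2 = nn.Linear(dim * mlp_ratio, dim)
        self.lif2 = LIFNeuron()

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: [T, B, N, D]; self-attention is applied per time step
        out = []
        for x in x_seq:
            q = self.norm1(x)
            x = x + self.attn(q, q, q)[0]
            out.append(x)
        x_seq = torch.stack(out)
        h = self.lif1(self.fc1(self.norm2(x_seq)))       # spiking non-linearity in the MLP
        x_seq = x_seq + self.fc2(h)
        return self.lif2(x_seq)                          # spikes passed to the next block


if __name__ == "__main__":
    # Toy usage: 4 time steps, batch of 2, 196 patch tokens, embedding dim 384.
    x = torch.randn(4, 2, 196, 384)
    block = SpikingEncoderBlock()
    print(block(x).shape)  # torch.Size([4, 2, 196, 384])
```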