Spiking ViT: spiking neural networks with transformer-attention for steel surface defect classification

Keywords (auto-generated tags): spiking neural network, artificial neural network, computer science, artificial intelligence, pattern recognition (psychology), classification, encoder, transformer, voltage, engineering, electrical engineering, operating system
Authors
Liang Gong,Hang Dong,Xinyu Zhang,Xin Cheng,Fan Ye,Liangchao Guo,Zhenghui Ge
Source
Journal: Journal of Electronic Imaging [SPIE - International Society for Optical Engineering]
Volume/Issue: 33(3) · Citations: 5
Identifier
DOI: 10.1117/1.jei.33.3.033001
Abstract

Throughout the steel production process, a variety of surface defects inevitably occur. These defects impair the quality of steel products and reduce manufacturing efficiency, so it is crucial to study and categorize the various defects on the surface of steel strips. The vision transformer (ViT) is a neural network model based on a self-attention mechanism that is widely used across many disciplines. Conventional ViT ignores the specifics of biological neural signaling and instead uses continuous activation functions to approximate real neurons. One of the fundamental building blocks of a spiking neural network is the leaky integrate-and-fire (LIF) neuron, whose biodynamic characteristics are akin to those of a real neuron. LIF neurons operate in an event-driven manner, so higher performance can be achieved with less power. The goal of this work is to integrate ViT and LIF neurons to build and train an end-to-end hybrid network architecture, the spiking vision transformer (S-ViT), for the classification of steel surface defects. The framework builds on the ViT architecture by replacing the activation functions used in ViT with LIF neurons, constructing a spiking transformer encoder as a global spike feature fusion module along with a spiking-MLP classification head that implements the classification, and using these as the basic building blocks of S-ViT. Experimental results show that the method achieves outstanding classification performance across all metrics: the overall test accuracies of S-ViT are 99.41%, 99.65%, 99.54%, and 99.77% on NEU-CLS, and 95.70%, 95.93%, 96.94%, and 97.19% on XSDD. S-ViT achieves superior classification performance compared with convolutional neural networks and recently reported methods, and it also improves on the original ViT model. Furthermore, robustness tests show that S-ViT maintains reliable accuracy when recognizing images corrupted by Gaussian noise.
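
To make the architectural idea concrete, below is a minimal PyTorch sketch of how LIF neurons can replace the activation function inside a ViT-style encoder block, as the abstract describes. This is not the authors' implementation: the class names (SurrogateSpike, LIFNeuron, SpikingEncoderBlock), the soft-reset LIF dynamics, the surrogate-gradient choice, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a LIF neuron with a surrogate
# gradient, used in place of the MLP activation inside a ViT-style encoder
# block. Shapes and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient backward."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of a scaled sigmoid as a smooth stand-in for the Dirac delta.
        sg = torch.sigmoid(4.0 * v)
        return grad_output * 4.0 * sg * (1.0 - sg)


class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire: the membrane potential leaks, integrates the
    input, and emits a spike (then soft-resets) when it crosses the threshold."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold

    def forward(self, x):  # x: (T, B, N, D) — T simulation time steps
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau          # leaky integration
            s = SurrogateSpike.apply(v - self.v_threshold)
            v = v - s * self.v_threshold           # soft reset after a spike
            spikes.append(s)
        return torch.stack(spikes)


class SpikingEncoderBlock(nn.Module):
    """ViT-style encoder block with the MLP activation replaced by LIF neurons."""

    def __init__(self, dim: int = 192, heads: int = 3, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.fc1 = nn.Linear(dim, dim * mlp_ratio)
        self.lif = LIFNeuron()
        self.fc2 = nn.Linear(dim * mlp_ratio, dim)

    def forward(self, x):  # x: (T, B, N, D)
        T, B, N, D = x.shape
        flat = x.reshape(T * B, N, D)
        h1 = self.norm1(flat)
        a, _ = self.attn(h1, h1, h1)               # self-attention over patch tokens
        flat = flat + a
        h = self.fc1(self.norm2(flat)).reshape(T, B, N, -1)
        h = self.lif(h)                            # spiking non-linearity over time
        flat = flat + self.fc2(h.reshape(T * B, N, -1))
        return flat.reshape(T, B, N, D)


if __name__ == "__main__":
    # Toy usage: 4 time steps, batch of 2, 196 patch tokens, 192-dim embeddings.
    tokens = torch.randn(4, 2, 196, 192)
    block = SpikingEncoderBlock()
    print(block(tokens).shape)  # torch.Size([4, 2, 196, 192])
```

The time loop in LIFNeuron is what makes such a block event-driven in principle: the layer that follows it only ever sees binary spike tensors, so on neuromorphic hardware the corresponding multiplications can be skipped for the zero entries, which is the low-power property the abstract attributes to LIF neurons.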