Spiking ViT: spiking neural networks with transformer attention for steel surface defect classification

Keywords: spiking neural network, artificial neural network, computer science, artificial intelligence, pattern recognition (psychology), classification, encoder, transformer, voltage, engineering, electrical engineering, operating system
Authors
Liang Gong,Hang Dong,Xinyu Zhang,Xin Cheng,Fan Ye,Liangchao Guo,Zhenghui Ge
Source
Journal: Journal of Electronic Imaging [SPIE]
Volume/Issue: 33 (03) · Cited by: 5
Identifier
DOI:10.1117/1.jei.33.3.033001
Abstract

Throughout the steel production process, a variety of surface defects inevitably occur. These defects can impair the quality of steel products and reduce manufacturing efficiency. Therefore, it is crucial to study and categorize the multiple defects on the surface of steel strips. The vision transformer (ViT) is a neural network model based on a self-attention mechanism that is widely used in many disciplines. A conventional ViT ignores the specifics of brain signaling and instead uses activation functions to simulate genuine neurons. One of the fundamental building blocks of a spiking neural network is the leaky integrate-and-fire (LIF) neuron, which has biodynamic characteristics akin to those of a genuine neuron. LIF neurons work in an event-driven manner, so higher performance can be achieved with less power. The goal of this work is to integrate ViT and LIF neurons to build and train an end-to-end hybrid network architecture, the spiking vision transformer (S-ViT), for the classification of steel surface defects. The framework builds on the ViT architecture by replacing the activation functions used in ViT with LIF neurons, constructing a spiking transformer encoder with a global spike feature fusion module as well as a spiking-MLP classification head that implements the classification functionality, and using these as the basic building blocks of S-ViT. Based on the experimental results, our method demonstrates outstanding classification performance across all metrics. The overall test accuracies of S-ViT are 99.41%, 99.65%, 99.54%, and 99.77% on NEU-CLS, and 95.70%, 95.93%, 96.94%, and 97.19% on XSDD. S-ViT achieves superior classification performance compared with convolutional neural networks and recently reported methods, and it also improves on the original ViT model. Furthermore, the robustness tests show that S-ViT maintains reliable accuracy when recognizing images corrupted by Gaussian noise.
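The core building block named in the abstract, the LIF neuron, can be illustrated with a short sketch. The snippet below is a minimal, discrete-time LIF layer with a rectangular surrogate gradient written in PyTorch; the class names, time-stepped input layout, and hyperparameters (tau, v_threshold) are illustrative assumptions and not the authors' exact S-ViT implementation. The hard spike threshold is non-differentiable, so spiking transformers commonly substitute a surrogate gradient during backpropagation, which is what allows a hybrid ViT/LIF network to be trained end to end.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron with a surrogate
# gradient, in the style used by spiking transformers. Hyperparameters and
# names below are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; rectangular surrogate gradient backward."""

    @staticmethod
    def forward(ctx, v, v_threshold):
        ctx.save_for_backward(v)
        ctx.v_threshold = v_threshold
        return (v >= v_threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradients only in a window of width 1.0 around the threshold.
        surrogate = (torch.abs(v - ctx.v_threshold) < 0.5).float()
        return grad_output * surrogate, None


class LIFNeuron(nn.Module):
    """LIF layer applied over a leading time dimension.

    Input:  x of shape (T, B, ...) -- a sequence of input currents.
    Output: binary spikes of the same shape.
    """

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        T = x.shape[0]
        v = torch.zeros_like(x[0])          # membrane potential
        spikes = []
        for t in range(T):
            # Leaky integration of the input current.
            v = v + (x[t] - v) / self.tau
            s = SurrogateSpike.apply(v, self.v_threshold)
            # Hard reset: membrane potential returns to 0 after a spike.
            v = v * (1.0 - s)
            spikes.append(s)
        return torch.stack(spikes, dim=0)


if __name__ == "__main__":
    lif = LIFNeuron()
    x = torch.rand(4, 2, 8)                 # (time steps, batch, features)
    out = lif(x)
    print(out.shape, out.unique())          # binary spike tensor: values in {0, 1}
```

In an S-ViT-style block, a layer like this would stand in for the activation functions of the transformer encoder and MLP head, so that intermediate features are propagated as binary spike tensors rather than continuous activations.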