SFAM: Lightweight Spectrum Unreferenced Attention Network

Keywords: Computer science, Artificial intelligence, Discrete cosine transform, Pattern recognition, Transformer, Computational complexity theory, Frequency domain, Artificial neural network, Feature extraction, Feature (linguistics), Image (mathematics), Algorithm, Computer vision, Engineering, Philosophy, Electrical engineering, Linguistics, Voltage
Authors
Xuanhao Qi, Min Zhi, Y. Yin, Ping Ping, Y. Zhang
Identifier
DOI:10.1145/3652583.3658006
Abstract

The construction of deep neural networks depends on a large number of parameters and a high computational cost, which poses a challenge in the field of image processing. To address the large size of Transformer network models and their inability to effectively capture local image features, this paper proposes a lightweight composite Transformer structure that combines a spectral feature refinement module (SFRM) and a parameter-free attention augmentation module (PAAM). The SFRM and PAAM work together to improve the quality of the spectral features used in the Transformer, aiming to enhance its performance without adding unnecessary complexity. The SFRM utilises the two-dimensional discrete cosine transform to convert the image from the spatial domain to the frequency domain, extracting the overall image structure from the low-frequency region and detailed feature information from the high-frequency region, and thereby filtering out spatially insignificant features in the original image. The PAAM introduces a parameter-free channel, spatial, and 3D attention enhancement mechanism that extracts correlation features of local information in the spatial domain without increasing the number of parameters, improving the expression of local features in the image. Additionally, a depthwise separable convolutional MLP (DConv MLP) is introduced to further reduce the weight of the network model. The experimental results show that the proposed algorithm achieves an accuracy of 79.6% on the ImageNet-1K dataset, 91.6% on the Oxford 102 Flower dataset, and 94.1% on the CIFAR-10 dataset. Compared to ViT-B, Swin-T, and CSwin-T, the number of parameters decreases by 86.11%, 58.62%, and 47.83%, respectively; it is also lower than that of VGG-16 and ResNet-110 by 91.07% and 77.70%, respectively.
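The abstract describes the SFRM as using the 2D discrete cosine transform to separate the coarse image structure (low frequencies) from fine detail (high frequencies). The sketch below illustrates that frequency-split idea with SciPy's `dctn`/`idctn`; the corner mask and the `cutoff` value are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a DCT-based low/high frequency split, assuming a simple
# top-left corner mask as the low-frequency region.
import numpy as np
from scipy.fft import dctn, idctn

def dct_frequency_split(feature_map: np.ndarray, cutoff: int = 8):
    """Split an (H, W) map into low- and high-frequency components.

    Low frequencies (top-left DCT corner) capture the overall structure;
    high frequencies carry edges and fine detail.
    """
    coeffs = dctn(feature_map, type=2, norm="ortho")        # spatial -> frequency
    low_mask = np.zeros_like(coeffs)
    low_mask[:cutoff, :cutoff] = 1.0                        # keep the low-frequency corner
    low = idctn(coeffs * low_mask, type=2, norm="ortho")          # coarse structure
    high = idctn(coeffs * (1.0 - low_mask), type=2, norm="ortho")  # fine detail
    return low, high

if __name__ == "__main__":
    x = np.random.rand(32, 32).astype(np.float64)
    low, high = dct_frequency_split(x)
    # Because the DCT is linear, the two bands sum back to the input.
    print(np.allclose(low + high, x, atol=1e-8))
```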
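The PAAM is described as a parameter-free channel, spatial, and 3D attention mechanism, but the abstract gives no formula. The sketch below follows a SimAM-style energy-based weighting as one plausible parameter-free 3D attention; the per-channel statistics and the `eps` stabiliser are assumptions, not the paper's stated method.

```python
# Sketch of a parameter-free 3D attention in the spirit of PAAM,
# assuming a SimAM-style energy function (no learnable weights).
import torch
import torch.nn as nn

class ParameterFree3DAttention(nn.Module):
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps  # numerical stabiliser, analogous to SimAM's lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); statistics are computed per channel over spatial dims
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation per activation
        v = d.sum(dim=(2, 3), keepdim=True) / n             # per-channel variance estimate
        energy_inv = d / (4 * (v + self.eps)) + 0.5         # inverse energy per activation
        return x * torch.sigmoid(energy_inv)                # reweight without extra parameters

# usage: attn = ParameterFree3DAttention(); y = attn(torch.randn(2, 64, 14, 14))
```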
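The abstract also mentions a depthwise separable convolutional MLP (DConv MLP) as the means of cutting parameters. A minimal sketch of the common pointwise-depthwise-pointwise pattern is given below; the expansion ratio, kernel size, and activation are assumptions, since the abstract does not specify them.

```python
# Sketch of a depthwise-separable convolutional MLP ("DConv MLP"),
# assuming a 1x1 expansion, a depthwise 3x3 convolution, and a 1x1 projection.
import torch
import torch.nn as nn

class DConvMLP(nn.Module):
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Conv2d(dim, hidden, kernel_size=1)      # pointwise expansion
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3,
                                padding=1, groups=hidden)     # depthwise: one filter per channel
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden, dim, kernel_size=1)      # pointwise projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.act(self.dwconv(self.fc1(x))))

# usage: mlp = DConvMLP(dim=96); y = mlp(torch.randn(2, 96, 14, 14))
```

Grouping the hidden convolution by channel is what keeps the parameter count low relative to a dense MLP of the same width.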