ScopeViT: Scale-aware Vision Transformer

Computer Science · Artificial Intelligence · Computer Vision · Transformer · Engineering · Electrical Engineering · Voltage
Authors
Xingju Nie, Xi Chen, Haoyuan Jin, Zhihang Zhu, Donglian Qi, Yunfeng Yan
Source
Journal: Pattern Recognition [Elsevier]
Volume/Issue: 110470-110470 | Cited by: 5
Identifier
DOI:10.1016/j.patcog.2024.110470
Abstract

Multi-scale features are essential for various vision tasks, such as classification, detection, and segmentation. Although Vision Transformers (ViTs) show remarkable success in capturing global features within an image, how to leverage multi-scale features in Transformers is not well explored. This paper proposes a scale-aware vision Transformer called ScopeViT that efficiently captures multi-granularity representations. Two novel attention mechanisms with lightweight computation are introduced: Multi-Scale Self-Attention (MSSA) and Global-Scale Dilated Attention (GSDA). MSSA embeds visual tokens with different receptive fields into distinct attention heads, allowing the model to perceive various scales across the network. GSDA enhances the model's understanding of the global context through a token-dilation operation, which reduces the number of tokens involved in attention computations. This dual attention method enables ScopeViT to "see" various scales throughout the entire network and effectively learn inter-object relationships, reducing the heavy quadratic computational complexity. Extensive experiments demonstrate that ScopeViT achieves competitive complexity/accuracy trade-offs compared to existing networks across a wide range of visual tasks. On the ImageNet-1K dataset, ScopeViT achieves a top-1 accuracy of 81.1%, using only 7.4M parameters and 2.0G FLOPs. Our approach outperforms Swin (ViT-based) by 1.9% accuracy while saving 42% of the parameters, outperforms MobileViTv2 (Hybrid-based) with a 0.7% accuracy gain while using 50% of the computations, and also beats ConvNeXt V2 (ConvNet-based) by 0.8% with fewer parameters. Our code is available on GitHub.
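The abstract describes MSSA and GSDA only at a high level. As an illustration, the sketch below shows in PyTorch what "per-head receptive fields" and "token dilation before attention" might look like; all class names, parameters (e.g. kernel_sizes, dilation), and design details here are assumptions for illustration and are not taken from the paper or its released code.

```python
# Minimal, hypothetical sketch of scale-aware attention (not the authors' implementation).
import torch
import torch.nn as nn


class MultiScaleSelfAttention(nn.Module):
    """MSSA-style idea: each attention head sees tokens embedded with a different
    receptive field, here obtained via depthwise convs of varying kernel size."""

    def __init__(self, dim, num_heads=4, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        assert num_heads == len(kernel_sizes) and dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        # One depthwise conv per head group, giving each head a distinct receptive field.
        self.local_embeds = nn.ModuleList([
            nn.Conv2d(self.head_dim, self.head_dim, k, padding=k // 2, groups=self.head_dim)
            for k in kernel_sizes
        ])
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):
        # x: (B, N, C) with N = H * W visual tokens
        B, N, C = x.shape
        # Split channels into head groups and apply a scale-specific local embedding.
        chunks = x.transpose(1, 2).reshape(B, self.num_heads, self.head_dim, H, W)
        embedded = [conv(chunks[:, i]) for i, conv in enumerate(self.local_embeds)]
        x = torch.stack(embedded, dim=1).reshape(B, C, N).transpose(1, 2)

        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # standard scaled dot-product
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


class GlobalScaleDilatedAttention(nn.Module):
    """GSDA-style idea: attend over a dilated (sub-sampled) token grid so that far
    fewer tokens enter the quadratic attention computation."""

    def __init__(self, dim, num_heads=4, dilation=2):
        super().__init__()
        self.dilation = dilation
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x, H, W):
        B, N, C = x.shape
        grid = x.reshape(B, H, W, C)
        # Keep every `dilation`-th token in each spatial direction.
        sparse = grid[:, ::self.dilation, ::self.dilation, :].reshape(B, -1, C)
        out, _ = self.attn(sparse, sparse, sparse)
        # Scatter the attended tokens back; untouched positions pass through unchanged.
        grid = grid.clone()
        h, w = grid[:, ::self.dilation, ::self.dilation, :].shape[1:3]
        grid[:, ::self.dilation, ::self.dilation, :] = out.reshape(B, h, w, C)
        return grid.reshape(B, N, C)


if __name__ == "__main__":
    x = torch.randn(2, 14 * 14, 64)  # toy batch of 14x14 token maps, embedding dim 64
    print(MultiScaleSelfAttention(64)(x, 14, 14).shape)       # (2, 196, 64)
    print(GlobalScaleDilatedAttention(64)(x, 14, 14).shape)   # (2, 196, 64)
```

With a dilation factor of 2, the dilated branch attends over roughly N/4 tokens, which is how such a design reduces the quadratic attention cost; the paper's actual token-dilation and multi-scale embedding schemes may differ in detail.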