Dynamic Spatial Sparsification for Efficient Vision Transformers and Convolutional Neural Networks

Keywords: Computer Science · Computation · Artificial Intelligence · Transformer · FLOPs · Token · Convolutional Neural Network · Feature · Pattern Recognition · Algorithm · Parallel Computing
Authors
Yongming Rao, Zuyan Liu, Wenliang Zhao, Jie Zhou, Jiwen Lu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Volume/Issue: 45 (9): 10883-10897  Cited by: 24
Identifier
DOI: 10.1109/tpami.2023.3263826
Abstract

In this paper, we present a new approach for model acceleration by exploiting spatial sparsity in visual data. We observe that the final prediction in vision Transformers is based on only a subset of the most informative regions, which is sufficient for accurate image recognition. Based on this observation, we propose a dynamic token sparsification framework that prunes redundant tokens progressively and dynamically based on the input to accelerate vision Transformers. Specifically, we devise a lightweight prediction module to estimate the importance of each token given the current features. The module is added to different layers to prune redundant tokens hierarchically. While the framework is inspired by our observation of sparse attention in vision Transformers, we find that the idea of adaptive and asymmetric computation can be a general solution for accelerating various architectures. We extend our method to hierarchical models, including CNNs and hierarchical vision Transformers, as well as more complex dense prediction tasks. To handle structured feature maps, we formulate a generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation for different spatial locations. By applying lightweight fast paths to less informative features and expressive slow paths to important locations, we can maintain the complete structure of feature maps while significantly reducing the overall computation. Extensive experiments on diverse modern architectures and different visual tasks demonstrate the effectiveness of our proposed framework. By hierarchically pruning 66% of the input tokens, our method reduces FLOPs by 31%∼35% and improves throughput by over 40%, while the accuracy drop is within 0.5% for various vision Transformers. By introducing asymmetric computation, a similar acceleration can be achieved on modern CNNs and Swin Transformers. Moreover, our method achieves promising results on more complex tasks, including semantic segmentation and object detection. Our results clearly demonstrate that dynamic spatial sparsification offers a new and more effective dimension for model acceleration. Code is available at https://github.com/raoyongming/DynamicViT.
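
The abstract describes two mechanisms: a lightweight prediction module that scores token importance so redundant tokens can be pruned progressively, and an asymmetric design for structured feature maps in which cheap fast paths handle less informative locations and expressive slow paths handle important ones. The sketch below illustrates both ideas in PyTorch. It is not the authors' implementation (the official code is at the repository linked above); the names TokenScorer, prune_tokens, AsymmetricBlock, keep_ratio, fast_path, and slow_path are placeholders, and the hard top-k selection stands in for whatever differentiable selection an actual training procedure would require.

import torch
import torch.nn as nn

class TokenScorer(nn.Module):
    """Lightweight prediction module: estimates how informative each token is."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim // 4),
                                 nn.GELU(), nn.Linear(dim // 4, 1))

    def forward(self, x):                        # x: (B, N, C) token features
        return self.mlp(x).squeeze(-1)           # (B, N) importance scores


def prune_tokens(x, scores, keep_ratio=0.7):
    """Drop the lowest-scoring tokens; applying this at several layers gives progressive pruning."""
    B, N, C = x.shape
    num_keep = max(1, int(N * keep_ratio))
    idx = scores.topk(num_keep, dim=1).indices                      # (B, num_keep)
    return x.gather(1, idx.unsqueeze(-1).expand(-1, -1, C))         # (B, num_keep, C)


class AsymmetricBlock(nn.Module):
    """For structured feature maps: expressive slow path on informative locations,
    lightweight fast path elsewhere, so the full spatial layout is preserved."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = TokenScorer(dim)
        self.slow_path = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                       nn.Linear(4 * dim, dim))      # expensive
        self.fast_path = nn.Linear(dim, dim)                         # cheap

    def forward(self, x, keep_ratio=0.7):        # x: (B, N, C) with N = H*W locations
        scores = self.scorer(x)
        k = max(1, int(x.shape[1] * keep_ratio))
        keep = torch.zeros_like(scores, dtype=torch.bool)
        keep.scatter_(1, scores.topk(k, dim=1).indices, True)        # (B, N) keep mask
        out = self.fast_path(x)                                      # cheap default everywhere
        out[keep] = self.slow_path(x[keep])                          # refine important locations
        return out                                                   # complete feature map

As a rough consistency check, applying keep_ratio = 0.7 at three pruning stages removes about 1 - 0.7^3 ≈ 66% of the input tokens, matching the pruning ratio quoted in the abstract; the fast/slow split is what lets the hierarchical (CNN and Swin) variants keep a dense feature map while concentrating computation on the informative locations.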