Dynamic Spatial Sparsification for Efficient Vision Transformers and Convolutional Neural Networks

Authors
Yongming Rao, Zuyan Liu, Wenliang Zhao, Jie Zhou, Jiwen Lu
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE Computer Society)
Volume/Issue: 45(9): 10883-10897 · Cited by: 24
Identifier
DOI:10.1109/tpami.2023.3263826
Abstract

In this paper, we present a new approach to model acceleration that exploits spatial sparsity in visual data. We observe that the final prediction in vision Transformers is based on only a subset of the most informative regions, which is sufficient for accurate image recognition. Based on this observation, we propose a dynamic token sparsification framework that prunes redundant tokens progressively and dynamically, conditioned on the input, to accelerate vision Transformers. Specifically, we devise a lightweight prediction module that estimates the importance of each token given the current features. The module is inserted at different layers to prune redundant tokens hierarchically. While the framework is inspired by our observation of sparse attention in vision Transformers, we find that the idea of adaptive and asymmetric computation is a general solution for accelerating various architectures. We extend our method to hierarchical models, including CNNs and hierarchical vision Transformers, as well as to more complex dense prediction tasks. To handle structured feature maps, we formulate a generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation across spatial locations. By applying lightweight fast paths to less informative features and expressive slow paths to important locations, we maintain the complete structure of the feature maps while significantly reducing the overall computation. Extensive experiments on diverse modern architectures and different visual tasks demonstrate the effectiveness of the proposed framework. By hierarchically pruning 66% of the input tokens, our method reduces FLOPs by 31%-35% and improves throughput by over 40%, while the accuracy drop stays within 0.5% for various vision Transformers. With asymmetric computation, a similar acceleration is achieved on modern CNNs and Swin Transformers.
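The progressive token pruning described above can be sketched in a few lines. Note this is an illustrative toy, not the authors' implementation: the paper uses a learned, end-to-end-trained prediction module, whereas here a simple feature-magnitude score and the function names (`score_tokens`, `prune_tokens`, `progressive_sparsify`) are hypothetical stand-ins.

```python
# Toy sketch of progressive token sparsification (hypothetical, not the
# DynamicViT code). A scorer ranks tokens by importance; at each pruning
# stage only the top fraction survives, so later layers see fewer tokens.

def score_tokens(tokens):
    # Stand-in importance score: mean absolute feature value per token.
    # The paper instead uses a lightweight learned prediction module.
    return [sum(abs(x) for x in t) / len(t) for t in tokens]

def prune_tokens(tokens, keep_ratio):
    scores = score_tokens(tokens)
    k = max(1, int(len(tokens) * keep_ratio))
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    kept = sorted(order[:k])  # keep survivors in their original order
    return [tokens[i] for i in kept]

def progressive_sparsify(tokens, keep_ratio=0.7, num_stages=3):
    # Three stages at a 0.7 keep ratio retain ~0.7**3 ~= 34% of tokens,
    # i.e. roughly the "prune 66% of tokens" regime from the abstract.
    for _ in range(num_stages):
        tokens = prune_tokens(tokens, keep_ratio)
    return tokens
```

Because pruning is applied hierarchically, each stage's cost savings compound: every subsequent attention layer runs on a shorter token sequence.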
Moreover, our method achieves promising results on more complex tasks including semantic segmentation and object detection. Our results clearly demonstrate that dynamic spatial sparsification offers a new and more effective dimension for model acceleration. Code is available at https://github.com/raoyongming/DynamicViT.
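The asymmetric-computation idea for structured feature maps can be sketched as follows. Again this is a hedged toy, assuming scalar stand-ins for the two branches: `slow_path` and `fast_path` here are hypothetical placeholders for an expressive block and a lightweight one, and the informativeness mask is given rather than predicted by a learned module as in the paper.

```python
# Toy sketch of asymmetric spatial computation (hypothetical illustration).
# Important locations go through an expressive "slow" path; the rest take a
# cheap "fast" path. Every location produces an output, so the structured
# feature map keeps its full shape for downstream layers.

def slow_path(features):
    # Stand-in for an expensive transform (e.g. a full conv/attention block).
    return [v * 2.0 + 1.0 for v in features]

def fast_path(features):
    # Stand-in for a cheap transform (e.g. identity or a light projection).
    return features

def asymmetric_layer(feature_map, important_mask):
    # feature_map: per-location feature vectors; important_mask[i] marks
    # informative locations (predicted by a learned module in the paper).
    return [slow_path(f) if m else fast_path(f)
            for f, m in zip(feature_map, important_mask)]
```

The key design point this illustrates: unlike token pruning, no location is dropped, so dense prediction tasks such as segmentation and detection still receive a complete feature map, while most of the compute is concentrated on the informative locations.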
