Failure
Transformer
Computer science
Process (computing)
Artificial intelligence
Computer engineering
Algorithm
Parallel computing
Engineering
Electrical engineering
Programming language
Voltage
Authors
Arnav Chavan, Zhi-Qiang Shen, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric P. Xing
Source
Journal: Cornell University - arXiv
Date: 2022-01-03
Identifier
DOI: 10.48550/arXiv.2201.00814
Abstract
This paper explores the feasibility of finding an optimal sub-model from a vision transformer and introduces a pure vision transformer slimming (ViT-Slim) framework. It searches for a sub-structure of the original model end-to-end across multiple dimensions, including the input tokens, MHSA, and MLP modules, while achieving state-of-the-art performance. Our method is based on a learnable and unified $\ell_1$ sparsity constraint with pre-defined factors to reflect the global importance in the continuous searching space of different dimensions. The searching process is highly efficient thanks to a single-shot training scheme. For instance, on DeiT-S, ViT-Slim takes only ~43 GPU hours for the searching process, and the searched structure is flexible, with diverse dimensionalities in different modules. A budget threshold is then applied according to the accuracy-FLOPs trade-off required by the target device, and a re-training process is performed to obtain the final model. Extensive experiments show that ViT-Slim can compress up to 40% of the parameters and 40% of the FLOPs on various vision transformers while increasing accuracy by ~0.6% on ImageNet. We also demonstrate the advantage of our searched models on several downstream datasets. Our code is available at https://github.com/Arnav0400/ViT-Slim.
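To make the abstract's mechanism concrete, here is a minimal sketch, not the authors' implementation (see the linked repository for that), of the core idea: a learnable soft mask per prunable dimension, an $\ell_1$ penalty that drives unimportant mask values toward zero during a single search phase, and a global threshold chosen afterwards to meet a parameter/FLOPs budget before re-training. All names here (`SlimMLP`, `sparsity_loss`, `l1_weight`, `keep_ratio`) are illustrative assumptions, and only the MLP hidden dimension is masked for brevity.

```python
import torch
import torch.nn as nn


class SlimMLP(nn.Module):
    """Transformer MLP block with a learnable mask over its hidden dimension."""

    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)
        self.act = nn.GELU()
        # One learnable importance score per hidden channel, initialised to 1 (keep all).
        self.mask = nn.Parameter(torch.ones(hidden_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft masking during the search: low-importance channels are scaled toward zero.
        return self.fc2(self.act(self.fc1(x)) * self.mask)


def sparsity_loss(model: nn.Module, l1_weight: float = 1e-4) -> torch.Tensor:
    """l1 penalty over all masks; added to the task loss during the search phase."""
    penalty = sum(m.mask.abs().sum() for m in model.modules() if isinstance(m, SlimMLP))
    return l1_weight * penalty


def prune_by_budget(model: nn.Module, keep_ratio: float = 0.6) -> None:
    """After the search, keep only the globally top fraction of channels by |mask|."""
    scores = torch.cat([m.mask.detach().abs() for m in model.modules()
                        if isinstance(m, SlimMLP)])
    threshold = torch.quantile(scores, 1.0 - keep_ratio)
    for m in model.modules():
        if isinstance(m, SlimMLP):
            keep = m.mask.detach().abs() >= threshold
            # Zero out pruned channels; a real implementation would rebuild smaller
            # layers from the surviving channels and re-train the compact model.
            with torch.no_grad():
                m.mask.mul_(keep.float())


if __name__ == "__main__":
    block = SlimMLP(dim=384, hidden_dim=1536)   # DeiT-S-like sizes
    x = torch.randn(2, 197, 384)                # (batch, tokens, dim)
    task_loss = block(x).pow(2).mean()          # stand-in for the real training loss
    loss = task_loss + sparsity_loss(block)
    loss.backward()
    prune_by_budget(block, keep_ratio=0.6)
```

In the paper's framing, analogous masks would also cover the input tokens and the MHSA dimensions, and the `keep_ratio` would be set from the budget threshold that matches the desired accuracy-FLOPs trade-off on the running device.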