Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

Authors
Li Yuan,Yunpeng Chen,Tao Wang,Weihao Yu,Yujun Shi,Zihang Jiang,Francis E. H. Tay,Jiashi Feng,Shuicheng Yan
Identifier
DOI:10.1109/iccv48922.2021.00060
Abstract

Transformers, which are popular for language modeling, have been explored for solving vision tasks recently, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a sequence of tokens with fixed length and then applies multiple Transformer layers to model their global relation for classification. However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. We find it is because: 1) the simple tokenization of input images fails to model the important local structure such as edges and lines among neighboring pixels, leading to low training sample efficiency; 2) the redundant attention backbone design of ViT leads to limited feature richness for fixed computation budgets and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (Tokens-to-Token), such that local structure represented by surrounding tokens can be modeled and token length can be reduced; 2) an efficient backbone with a deep-narrow structure for vision transformer motivated by CNN architecture design after empirical study. Notably, T2T-ViT reduces the parameter count and MACs of vanilla ViT by half, while achieving more than 3.0% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves comparable performance with MobileNets by directly training on ImageNet. For example, T2T-ViT with comparable size to ResNet50 (21.5M parameters) can achieve 83.3% top-1 accuracy at 384×384 image resolution on ImageNet.
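The core operation described in the abstract, the layer-wise Tokens-to-Token transformation, can be illustrated with a short sketch. The snippet below is a minimal PyTorch approximation of one T2T step, not the authors' released implementation: the class name T2TStep and all hyperparameters (kernel_size=3, stride=2, a single-head standard transformer layer) are illustrative assumptions, and the efficient attention variants and deep-narrow backbone used in the actual model are omitted.

```python
import torch
import torch.nn as nn


class T2TStep(nn.Module):
    """One Tokens-to-Token step (sketch): attend over the current tokens,
    reshape them back into a 2D token map, and merge each overlapping
    k x k neighborhood into a single, longer token. The token sequence
    shrinks while local structure from neighboring tokens is preserved."""

    def __init__(self, dim, kernel_size=3, stride=2, padding=1, num_heads=1):
        super().__init__()
        self.k, self.s, self.p = kernel_size, stride, padding
        # Lightweight transformer layer modeling relations among current tokens.
        self.attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, dim_feedforward=dim, batch_first=True
        )
        # Overlapping "soft split": each output column concatenates the features
        # of one k x k neighborhood, so the token dimension grows to dim * k * k.
        self.unfold = nn.Unfold(kernel_size=kernel_size, stride=stride, padding=padding)

    def forward(self, tokens, h, w):
        # tokens: (B, N, dim) with N == h * w
        b, n, c = tokens.shape
        tokens = self.attn(tokens)                          # re-structurization
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)   # back to a 2D token map
        merged = self.unfold(feat).transpose(1, 2)          # (B, N_new, dim*k*k)
        new_h = (h + 2 * self.p - self.k) // self.s + 1     # spatial size after split
        new_w = (w + 2 * self.p - self.k) // self.s + 1
        return merged, new_h, new_w


if __name__ == "__main__":
    step = T2TStep(dim=64)
    x = torch.randn(2, 56 * 56, 64)   # 2 images, 56x56 token map, 64-dim tokens
    y, h, w = step(x, 56, 56)
    print(y.shape, h, w)              # token length shrinks, token dimension grows
```

Stacking a few such steps progressively reduces the token length, after which the deep-narrow transformer backbone described in the abstract processes the final token sequence for classification.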