Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

Authors
Li Yuan,Yunpeng Chen,Tao Wang,Weihao Yu,Yujun Shi,Zihang Jiang,Francis E. H. Tay,Jiashi Feng,Shuicheng Yan
Identifier
DOI:10.1109/iccv48922.2021.00060
Abstract

Transformers, which are popular for language modeling, have been explored for solving vision tasks recently, e.g., the Vision Transformer (ViT) for image classification. The ViT model splits each image into a sequence of tokens with fixed length and then applies multiple Transformer layers to model their global relation for classification. However, ViT achieves inferior performance to CNNs when trained from scratch on a midsize dataset like ImageNet. We find it is because: 1) the simple tokenization of input images fails to model the important local structure such as edges and lines among neighboring pixels, leading to low training sample efficiency; 2) the redundant attention backbone design of ViT leads to limited feature richness for fixed computation budgets and limited training samples. To overcome such limitations, we propose a new Tokens-To-Token Vision Transformer (T2T-ViT), which incorporates 1) a layer-wise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (Tokens-to-Token), such that local structure represented by surrounding tokens can be modeled and token length can be reduced; 2) an efficient backbone with a deep-narrow structure for vision transformers, motivated by CNN architecture design after empirical study. Notably, T2T-ViT reduces the parameter count and MACs of vanilla ViT by half, while achieving more than 3.0% improvement when trained from scratch on ImageNet. It also outperforms ResNets and achieves comparable performance with MobileNets by directly training on ImageNet. For example, T2T-ViT with comparable size to ResNet50 (21.5M parameters) can achieve 83.3% top-1 accuracy at image resolution 384×384 on ImageNet.
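To make the layer-wise T2T transformation concrete, the following is a minimal PyTorch sketch of one aggregation step, not the authors' implementation. It assumes the soft split is realized with nn.Unfold over overlapping 3x3 token neighborhoods at stride 2, so each step models local structure among adjacent tokens while cutting the token count roughly 4x; the T2TStep class name and the plain linear projection (standing in for the transformer layer the paper applies between splits) are illustrative assumptions.

import torch
import torch.nn as nn

class T2TStep(nn.Module):
    """One hypothetical Tokens-to-Token step: reshape the token sequence
    back to a 2-D grid, then merge each 3x3 neighborhood of neighboring
    tokens into a single new token."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Soft split: overlapping 3x3 windows, stride 2, padding 1
        # halves each spatial side, so ~4x fewer tokens per step.
        self.unfold = nn.Unfold(kernel_size=3, stride=2, padding=1)
        # Assumption: a linear projection stands in for the transformer
        # layer the paper applies between consecutive soft splits.
        self.proj = nn.Linear(in_dim * 9, out_dim)

    def forward(self, tokens, h, w):
        # tokens: (B, h*w, C) -> image-like grid (B, C, h, w)
        b, n, c = tokens.shape
        grid = tokens.transpose(1, 2).reshape(b, c, h, w)
        # Aggregate each 3x3 neighborhood into one token: (B, L, 9*C)
        merged = self.unfold(grid).transpose(1, 2)
        new_h, new_w = (h + 1) // 2, (w + 1) // 2
        return self.proj(merged), new_h, new_w

# Usage: a 56x56 grid of 64-dim tokens -> a 28x28 grid of 96-dim tokens
step = T2TStep(64, 96)
x = torch.randn(2, 56 * 56, 64)
y, h, w = step(x, 56, 56)
print(y.shape, h, w)  # torch.Size([2, 784, 96]) 28 28

Stacking two or three such steps before the deep-narrow Transformer backbone mirrors the progressive tokenization the abstract describes: token length shrinks while each surviving token summarizes an increasingly large local region.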
