
ConViT: improving vision transformers with soft convolutional inductive biases

Keywords: Computer science, Locality, Inductive bias, Artificial intelligence, Convolutional neural network, Transformer, Machine learning, Pattern recognition
Authors
Stéphane d’Ascoli, Hugo Touvron, Matthew L. Leavitt, Ari S. Morcos, Giulio Biroli, Levent Sagun
Source
Journal: Journal of Statistical Mechanics: Theory and Experiment [IOP Publishing]
Volume/Issue: 2022 (11): 114005. Cited by: 395
Identifier
DOI: 10.1088/1742-5468/ac9830
Abstract

Convolutional architectures have proven to be extremely successful for vision tasks. Their hard inductive biases enable sample-efficient learning, but come at the cost of a potentially lower performance ceiling. Vision transformers rely on more flexible self-attention layers, and have recently outperformed CNNs for image classification. However, they require costly pre-training on large external datasets or distillation from pre-trained convolutional networks. In this paper, we ask the following question: is it possible to combine the strengths of these two architectures while avoiding their respective limitations? To this end, we introduce gated positional self-attention (GPSA), a form of positional self-attention which can be equipped with a ‘soft’ convolutional inductive bias. We initialize the GPSA layers to mimic the locality of convolutional layers, then give each attention head the freedom to escape locality by adjusting a gating parameter regulating the attention paid to position versus content information. The resulting convolutional-like ViT architecture, ConViT, outperforms the DeiT (Touvron et al 2020 arXiv:2012.12877) on ImageNet, while offering a much improved sample efficiency. We further investigate the role of locality in learning by first quantifying how it is encouraged in vanilla self-attention layers, then analyzing how it is escaped in GPSA layers. We conclude by presenting various ablations to better understand the success of the ConViT. Our code and models are released publicly at https://github.com/facebookresearch/convit.
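The gating idea described in the abstract can be illustrated with a minimal sketch: each head mixes a content-based attention map with a positional attention map via a learned scalar gate. This is a simplified, single-head NumPy illustration of the mechanism, not the authors' implementation (see the linked repository for that); the function and variable names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gpsa_attention(q, k, pos_scores, gate):
    """Sketch of gated positional self-attention for one head.

    Blends content attention softmax(q k^T / sqrt(d)) with a positional
    attention map softmax(pos_scores) using sigma(gate) in (0, 1).
    gate -> -inf recovers pure content attention (vanilla self-attention);
    gate -> +inf recovers pure positional (convolution-like) attention.
    """
    d = q.shape[-1]
    content = softmax(q @ k.T / np.sqrt(d))      # (n, n), rows sum to 1
    positional = softmax(pos_scores)             # (n, n), rows sum to 1
    sigma = 1.0 / (1.0 + np.exp(-gate))          # gating parameter
    # convex combination of two row-stochastic maps is row-stochastic
    return (1.0 - sigma) * content + sigma * positional

# toy usage: 4 patches, embedding dim 8
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
pos_scores = rng.normal(size=(4, 4))  # stands in for learned positional scores
attn = gpsa_attention(q, k, pos_scores, gate=2.0)  # gate > 0: mostly positional
```

Initializing `pos_scores` so each patch attends to a fixed neighbor (and `gate` large) mimics a convolutional kernel at the start of training; the head can then "escape locality" by learning to decrease `gate`.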
