
VOLO: Vision Outlooker for Visual Recognition

Authors
Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, Shuicheng Yan
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [Institute of Electrical and Electronics Engineers]
Volume/Issue: pp. 1-13 · Cited by: 159
Identifier
DOI: 10.1109/tpami.2022.3206108
Abstract

Recently, Vision Transformers (ViTs) have been broadly explored in visual recognition. Owing to their low efficiency in encoding fine-level features, the performance of ViTs is still inferior to that of state-of-the-art CNNs when trained from scratch on a midsize dataset such as ImageNet. Through experimental analysis, we find this is due to two reasons: 1) the simple tokenization of input images fails to model important local structure such as edges and lines, leading to low training-sample efficiency; 2) the redundant attention backbone design of ViTs leads to limited feature richness under fixed computation budgets and limited training samples. To overcome these limitations, we present a new, simple, and generic architecture, termed Vision Outlooker (VOLO), which implements a novel outlook attention operation that dynamically conducts local feature aggregation in a sliding-window manner across the input image. Unlike self-attention, which focuses on modeling global dependencies of local features at a coarse level, outlook attention targets encoding finer-level features, which are critical for recognition but ignored by self-attention. Outlook attention breaks the bottleneck of self-attention, whose computation cost scales quadratically with the input spatial dimension, and is thus much more memory-efficient. Compared with our Tokens-To-Token Vision Transformer (T2T-ViT), VOLO can more efficiently encode the fine-level features that are essential for high-performance visual recognition. Experiments show that with only 26.6M learnable parameters, VOLO achieves 84.2% top-1 accuracy on ImageNet-1K without using extra training data, 2.7% better than T2T-ViT with a comparable number of parameters. When the model size is scaled up to 296M parameters, its performance can be further improved to 87.1%, setting a new record for ImageNet-1K classification.
In addition, we take the proposed VOLO models as pretrained backbones and report superior performance on downstream tasks such as semantic segmentation. Code is available at https://github.com/sail-sg/volo.
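The sliding-window aggregation described in the abstract can be sketched as follows. This is a minimal single-head NumPy illustration of the idea, not the official implementation (see the repository above): the projections are random stand-ins for learned weights, and multi-head attention, normalization, and batching are omitted. The key difference from self-attention is that each center token predicts its K×K window's attention weights directly with a linear map, so no query-key dot products over the full image are needed.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
H, W, C, K = 4, 4, 8, 3          # tiny feature map, 3x3 outlook window

x = rng.standard_normal((H, W, C))
# Stand-ins for learned projections (random here, for illustration only):
w_v = rng.standard_normal((C, C)) / np.sqrt(C)       # value projection
w_a = rng.standard_normal((C, K**4)) / np.sqrt(C)    # predicts a (K*K, K*K) weight map

pad = K // 2
xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))     # zero-pad the borders
out = np.zeros_like(xp)

for i in range(H):
    for j in range(W):
        # values from the K x K neighbourhood centered at (i, j)
        v = xp[i:i + K, j:j + K].reshape(K * K, C) @ w_v
        # attention weights predicted directly from the center token alone:
        # no query-key dot products, so cost stays linear in spatial size
        a = softmax((x[i, j] @ w_a).reshape(K * K, K * K))
        # aggregate the window and fold (overlap-add) back onto the grid
        out[i:i + K, j:j + K] += (a @ v).reshape(K, K, C)

out = out[pad:pad + H, pad:pad + W]                  # crop the padding
print(out.shape)                                     # (4, 4, 8)
```

Because the weights for each window come from a single linear layer on the center token, the per-position cost is O(K^4) regardless of image size, which is the memory advantage over quadratic self-attention mentioned above.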