VOLO: Vision Outlooker for Visual Recognition

Authors
Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, Shuicheng Yan
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Pages: 1-13 · Cited by: 200
Identifier
DOI: 10.1109/tpami.2022.3206108
Abstract

Recently, Vision Transformers (ViTs) have been broadly explored in visual recognition. Due to their low efficiency in encoding fine-level features, the performance of ViTs is still inferior to that of state-of-the-art CNNs when trained from scratch on a midsize dataset like ImageNet. Through experimental analysis, we find this is due to two reasons: 1) the simple tokenization of input images fails to model important local structures such as edges and lines, leading to low training sample efficiency; 2) the redundant attention backbone design of ViTs leads to limited feature richness under fixed computation budgets and limited training samples. To overcome these limitations, we present a new simple and generic architecture, termed Vision Outlooker (VOLO), which implements a novel outlook attention operation that dynamically conducts local feature aggregation in a sliding-window manner across the input image. Unlike self-attention, which focuses on modeling global dependencies of local features at a coarse level, our outlook attention targets encoding finer-level features, which are critical for recognition but ignored by self-attention. Outlook attention breaks the bottleneck of self-attention, whose computation cost scales quadratically with the input spatial dimension, and is thus much more memory efficient. Compared to our Tokens-To-Token Vision Transformer (T2T-ViT), VOLO can more efficiently encode fine-level features that are essential for high-performance visual recognition. Experiments show that with only 26.6M learnable parameters, VOLO achieves 84.2% top-1 accuracy on ImageNet-1K without using extra training data, 2.7% better than T2T-ViT with a comparable number of parameters. When the model size is scaled up to 296M parameters, its performance can be further improved to 87.1%, setting a new record for ImageNet-1K classification.
In addition, we use the proposed VOLO as a pretrained model and report superior performance on downstream tasks, such as semantic segmentation. Code is available at https://github.com/sail-sg/volo.
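The core idea of the outlook attention described above is that each spatial location generates attention weights for its K×K local window directly from a linear projection of the center token, rather than from query-key dot products, and then aggregates the window's value vectors with those weights. The following is a minimal single-head NumPy sketch of this idea, not the paper's implementation (which is multi-head, batched, and uses unfold/fold for efficiency); the function and weight names (`outlook_attention`, `w_attn`, `w_val`) are illustrative assumptions.

```python
import numpy as np

def outlook_attention(x, w_attn, w_val, k=3):
    """Minimal single-head outlook attention sketch (illustrative, not the
    official VOLO implementation).

    x      : (H, W, C) feature map
    w_attn : (C, k*k)  maps each center token to attention logits over its window
    w_val  : (C, C)    value projection
    """
    H, W, C = x.shape
    v = x @ w_val                                   # value embeddings, (H, W, C)
    pad = k // 2
    vp = np.pad(v, ((pad, pad), (pad, pad), (0, 0)))  # zero-pad spatial borders
    out = np.zeros_like(v)
    for i in range(H):
        for j in range(W):
            # Attention weights come from a linear map of the center token alone:
            # no query-key dot product, so cost is linear in spatial size.
            logits = x[i, j] @ w_attn               # (k*k,)
            weights = np.exp(logits - logits.max())
            weights /= weights.sum()                # softmax over the k*k window
            window = vp[i:i + k, j:j + k].reshape(k * k, C)
            out[i, j] = weights @ window            # weighted local aggregation
    return out
```

Because the k×k window slides densely over the feature map, fine local structure (edges, lines) is aggregated at every position, while memory stays proportional to H·W·k² rather than (H·W)² as in full self-attention.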