UniFormer: Unifying Convolution and Self-Attention for Visual Recognition

Authors
Kunchang Li, Yali Wang, Junhao Zhang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Pages: 1-18 · Citations: 198
Identifier
DOI: 10.1109/tpami.2023.3282631
Abstract

It is a challenging task to learn discriminative representation from images and videos, due to large local redundancy and complex global dependency in these visual data. Convolutional neural networks (CNNs) and vision transformers (ViTs) have been the two dominant frameworks in the past few years. Though CNNs can efficiently decrease local redundancy by convolution within a small neighborhood, the limited receptive field makes it hard to capture global dependency. Alternatively, ViTs can effectively capture long-range dependency via self-attention, while blind similarity comparisons among all the tokens lead to high redundancy. To resolve these problems, we propose a novel Unified transFormer (UniFormer), which can seamlessly integrate the merits of convolution and self-attention in a concise transformer format. Different from the typical transformer blocks, the relation aggregators in our UniFormer block are equipped with local and global token affinity respectively in shallow and deep layers, allowing it to tackle both redundancy and dependency for efficient and effective representation learning. Finally, we flexibly stack our blocks into a new powerful backbone, and adopt it for various vision tasks from the image to the video domain, from classification to dense prediction. Without any extra training data, our UniFormer achieves 86.3 top-1 accuracy on the ImageNet-1K classification task. With only ImageNet-1K pre-training, it achieves state-of-the-art performance in a broad range of downstream tasks. It obtains 82.9/84.8 top-1 accuracy on Kinetics-400/600 and 60.9/71.2 top-1 accuracy on Something-Something V1/V2 video classification tasks, 53.8 box AP and 46.4 mask AP on the COCO object detection task, 50.8 mIoU on the ADE20K semantic segmentation task, and 77.4 AP on the COCO pose estimation task.
Moreover, we build an efficient UniFormer with a concise hourglass design of token shrinking and recovering, which achieves 2-4× higher throughput than recent lightweight models. Code is available at https://github.com/Sense-X/UniFormer .
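The abstract's central idea — shallow layers aggregate tokens with a content-independent *local* affinity (convolution-like), while deep layers use a content-dependent *global* affinity (self-attention) — can be sketched as follows. This is an illustrative single-head, 1-D NumPy simplification, not the authors' implementation; the names `local_mhra` and `global_mhra` are ours, and the real model operates on 2-D/3-D token grids with multiple heads.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_mhra(tokens, kernel):
    """Local relation aggregation (shallow layers): each token becomes a
    weighted sum of its neighbours in a small window. The affinity is a
    learnable kernel that does NOT depend on token content, so it cannot
    waste computation on redundant all-pair comparisons."""
    n, d = tokens.shape
    k = len(kernel)
    pad = k // 2
    padded = np.pad(tokens, ((pad, pad), (0, 0)))  # zero-pad the sequence
    out = np.zeros_like(tokens)
    for i in range(n):
        window = padded[i:i + k]   # (k, d) neighbourhood around token i
        out[i] = kernel @ window   # affinity-weighted sum over the window
    return out

def global_mhra(tokens, wq, wk, wv):
    """Global relation aggregation (deep layers): standard self-attention,
    where the affinity is the content similarity between all token pairs,
    capturing long-range dependency."""
    q, k_, v = tokens @ wq, tokens @ wk, tokens @ wv
    affinity = softmax(q @ k_.T / np.sqrt(q.shape[-1]))  # (n, n) affinities
    return affinity @ v

# Example: 4 tokens of dimension 3.
tokens = np.arange(12.0).reshape(4, 3)
shallow = local_mhra(tokens, np.array([0.25, 0.5, 0.25]))  # local smoothing
deep = global_mhra(tokens, np.eye(3), np.eye(3), np.eye(3))  # all-pair mixing
```

Both aggregators share the same block structure in the paper (positional encoding, relation aggregator, feed-forward network), which is what lets the two affinity types be stacked into one unified backbone.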