DilateFormer: Multi-Scale Dilated Transformer for Visual Recognition

Keywords: Computer Science, Transformer, Artificial Intelligence, Redundancy (engineering), Exploitation, Theoretical Computer Science, Pattern Recognition, Computer Vision
Authors
Jiayu Jiao, Yu-Ming Tang, Kun-Yu Lin, Yipeng Gao, Andy J. Ma, Yaowei Wang, Wei-Shi Zheng
Source
Journal: IEEE Transactions on Multimedia (Institute of Electrical and Electronics Engineers)
Volume 25, pp. 8906-8919 · Citations: 299
Identifier
DOI: 10.1109/TMM.2023.3243616
Abstract

As a de facto solution, vanilla Vision Transformers (ViTs) are encouraged to model long-range dependencies between arbitrary image patches, but the globally attended receptive field leads to quadratic computational cost. Another branch of Vision Transformers exploits local attention inspired by CNNs, which only models the interactions between patches within small neighborhoods. Although such a solution reduces the computational cost, it naturally suffers from small attended receptive fields, which may limit performance. In this work, we explore effective Vision Transformers to pursue a preferable trade-off between computational complexity and the size of the attended receptive field. By analyzing the patch interaction of global attention in ViTs, we observe two key properties in the shallow layers, namely locality and sparsity, indicating the redundancy of global dependency modeling in the shallow layers of ViTs. Accordingly, we propose Multi-Scale Dilated Attention (MSDA) to model local and sparse patch interaction within a sliding window. With a pyramid architecture, we construct a Multi-Scale Dilated Transformer (DilateFormer) by stacking MSDA blocks at low-level stages and global multi-head self-attention blocks at high-level stages. Our experimental results show that DilateFormer achieves state-of-the-art performance on various vision tasks. On the ImageNet-1K classification task, DilateFormer achieves comparable performance with 70% fewer FLOPs than existing state-of-the-art models. Our DilateFormer-Base achieves 85.6% top-1 accuracy on the ImageNet-1K classification task, 53.5% box mAP / 46.1% mask mAP on COCO object detection/instance segmentation, and 51.1% MS mIoU on ADE20K semantic segmentation.
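To make the MSDA idea in the abstract concrete, the following is a minimal PyTorch sketch of dilated sliding-window attention in which each head attends only to a small, dilated neighborhood of keys/values around every query patch, and different heads use different dilation rates (the multi-scale part). The module names, kernel size, and dilation rates here are illustrative assumptions, not the authors' exact configuration or official code.

```python
# Minimal sketch of Multi-Scale Dilated Attention (MSDA), assuming a sliding
# k x k window gathered with F.unfold at per-head dilation rates.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedWindowAttention(nn.Module):
    """Single-scale sliding-window attention at one dilation rate."""

    def __init__(self, head_dim, kernel_size=3, dilation=1):
        super().__init__()
        self.kernel_size = kernel_size
        self.dilation = dilation
        self.scale = head_dim ** -0.5
        # Padding keeps the dilated window centred on each query position.
        self.padding = dilation * (kernel_size - 1) // 2

    def forward(self, q, k, v):
        # q, k, v: (B, C, H, W), already projected for this head.
        B, C, H, W = q.shape
        n = self.kernel_size ** 2
        # Gather the dilated k x k neighbourhood of every position:
        # unfold gives (B, C * n, H * W); reshape to (B, C, n, H * W).
        k_win = F.unfold(k, self.kernel_size, dilation=self.dilation,
                         padding=self.padding).reshape(B, C, n, H * W)
        v_win = F.unfold(v, self.kernel_size, dilation=self.dilation,
                         padding=self.padding).reshape(B, C, n, H * W)
        q = q.reshape(B, C, 1, H * W)
        # Attention over the n sparse neighbours of each query patch.
        attn = (q * k_win).sum(dim=1, keepdim=True) * self.scale   # (B, 1, n, HW)
        attn = attn.softmax(dim=2)
        out = (attn * v_win).sum(dim=2)                            # (B, C, HW)
        return out.reshape(B, C, H, W)


class MultiScaleDilatedAttention(nn.Module):
    """Heads are split across several dilation rates (multi-scale)."""

    def __init__(self, dim, num_heads=4, kernel_size=3, dilations=(1, 2, 3, 4)):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.proj = nn.Conv2d(dim, dim, 1)
        # One dilation rate per head, cycling over the given rates.
        self.windows = nn.ModuleList(
            DilatedWindowAttention(dim // num_heads, kernel_size,
                                   dilations[i % len(dilations)])
            for i in range(num_heads)
        )

    def forward(self, x):
        # x: (B, dim, H, W) feature map from a low-level stage.
        q, k, v = self.qkv(x).chunk(3, dim=1)
        qs = q.chunk(self.num_heads, dim=1)
        ks = k.chunk(self.num_heads, dim=1)
        vs = v.chunk(self.num_heads, dim=1)
        out = torch.cat([w(qi, ki, vi)
                         for w, qi, ki, vi in zip(self.windows, qs, ks, vs)],
                        dim=1)
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 64, 56, 56)                 # hypothetical stage-1 features
    msda = MultiScaleDilatedAttention(dim=64, num_heads=4)
    print(msda(x).shape)                           # torch.Size([2, 64, 56, 56])
```

In a pyramid architecture along the lines described above, blocks like this would replace global self-attention only in the shallow (high-resolution) stages, where attention is observed to be local and sparse, while the deeper, lower-resolution stages keep standard global multi-head self-attention.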