
Focal Self-attention for Local-Global Interactions in Vision Transformers

Psychology, Computer Science, Artificial Intelligence, Cognitive Science
Authors
Jianwei Yang,Chunyuan Li,Pengchuan Zhang,Xiyang Dai,Bin Xiao,Yuan Liu,Jianfeng Gao
Source
Journal: Cornell University - arXiv · Cited by: 143
Identifier
DOI:10.48550/arxiv.2107.00641
Abstract

Recently, Vision Transformer and its variants have shown great promise on various computer vision tasks. The ability to capture short- and long-range visual dependencies through self-attention is arguably the main source of this success, but it also brings challenges due to the quadratic computational overhead, especially for high-resolution vision tasks (e.g., object detection). In this paper, we present focal self-attention, a new mechanism that incorporates both fine-grained local and coarse-grained global interactions. With this mechanism, each token attends to its closest surrounding tokens at fine granularity and to tokens far away at coarse granularity, and can thus capture both short- and long-range visual dependencies efficiently and effectively. Building on focal self-attention, we propose a new variant of Vision Transformer models, called Focal Transformer, which achieves superior performance over the state-of-the-art vision Transformers on a range of public image classification and object detection benchmarks. In particular, our Focal Transformer models with a moderate size of 51.1M parameters and a larger size of 89.8M parameters achieve 83.5% and 83.8% Top-1 accuracy, respectively, on ImageNet classification at 224x224 resolution. Using Focal Transformers as backbones, we obtain consistent and substantial improvements over the current state-of-the-art Swin Transformers for 6 different object detection methods trained with standard 1x and 3x schedules. Our largest Focal Transformer yields 58.7/58.9 box mAP and 50.9/51.3 mask mAP on COCO mini-val/test-dev, and 55.4 mIoU on ADE20K semantic segmentation, creating a new SoTA on three of the most challenging computer vision tasks.
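The core idea, attending to nearby tokens at fine granularity and to distant tokens only through pooled summaries, can be illustrated with a short sketch. The PyTorch module below is a hypothetical single-head simplification, not the authors' released implementation: the class name FocalAttentionSketch, the windows helper, and the choice of a single pooled global level are our assumptions. Here, each non-overlapping query window attends to its own tokens (fine) plus an adaptively pooled global grid (coarse); the paper's actual design additionally uses multiple focal levels with overlapping surround windows, multi-head attention, and relative position bias.

```python
# A minimal sketch of focal-style self-attention (single head, one coarse
# level). Hypothetical simplification for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FocalAttentionSketch(nn.Module):
    def __init__(self, dim, window_size=7, pool_size=7):
        super().__init__()
        self.ws = window_size          # fine-grained local window size
        self.ps = pool_size            # coarse global grid size after pooling
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):
        # x: (B, H, W, C), with H and W divisible by window_size
        B, H, W, C = x.shape
        ws, ps = self.ws, self.ps

        q = self.q(x)                                   # (B, H, W, C)
        k_fine, v_fine = self.kv(x).chunk(2, dim=-1)    # (B, H, W, C) each

        # Coarse tokens: average-pool the whole map down to a ps x ps grid
        pooled = F.adaptive_avg_pool2d(
            x.permute(0, 3, 1, 2), (ps, ps)             # (B, C, ps, ps)
        ).flatten(2).transpose(1, 2)                    # (B, ps*ps, C)
        k_coarse, v_coarse = self.kv(pooled).chunk(2, dim=-1)

        # Partition queries and fine keys/values into ws x ws windows
        def windows(t):  # (B, H, W, C) -> (B*nW, ws*ws, C)
            t = t.view(B, H // ws, ws, W // ws, ws, C)
            return t.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

        q_w = windows(q)
        k_w, v_w = windows(k_fine), windows(v_fine)

        # Broadcast the coarse tokens to every window of the same image
        nW = (H // ws) * (W // ws)
        k_c = k_coarse.repeat_interleave(nW, dim=0)     # (B*nW, ps*ps, C)
        v_c = v_coarse.repeat_interleave(nW, dim=0)

        # Each query scores fine (local) plus coarse (global) keys
        k = torch.cat([k_w, k_c], dim=1)
        v = torch.cat([v_w, v_c], dim=1)
        attn = (q_w * self.scale) @ k.transpose(-2, -1)
        out = attn.softmax(dim=-1) @ v                  # (B*nW, ws*ws, C)

        # Merge windows back to the (B, H, W, C) feature map
        out = out.view(B, H // ws, W // ws, ws, ws, C)
        out = out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 28, 28, 96)                      # (B, H, W, C)
    print(FocalAttentionSketch(dim=96)(x).shape)        # (2, 28, 28, 96)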
