Computer science
Security token
Computation
Transformer
Artificial intelligence
Computer engineering
Pattern recognition (psychology)
Algorithm
Computer network
Engineering
Voltage
Electrical engineering
Authors
Sucheng Ren, Daquan Zhou, Shengfeng He, Jiashi Feng, Xinchao Wang
Identifier
DOI:10.1109/cvpr52688.2022.01058
Abstract
Recent Vision Transformer (ViT) models have demonstrated encouraging results across various computer vision tasks, thanks to their competence in modeling long-range dependencies of image patches or tokens via self-attention. These models, however, usually designate similar receptive fields for each token feature within each layer. Such a constraint inevitably limits the ability of each self-attention layer to capture multi-scale features, thereby leading to performance degradation in handling images with multiple objects of different scales. To address this issue, we propose a novel and generic strategy, termed shunted self-attention (SSA), that allows ViTs to model attention at hybrid scales per attention layer. The key idea of SSA is to inject heterogeneous receptive field sizes into tokens: before computing the self-attention matrix, it selectively merges tokens to represent larger object features while keeping certain tokens to preserve fine-grained features. This novel merging scheme enables the self-attention to learn relationships between objects of different sizes, and simultaneously reduces the token number and the computational cost. Extensive experiments across various tasks demonstrate the superiority of SSA. Specifically, the SSA-based transformer achieves 84.0% Top-1 accuracy and outperforms the state-of-the-art Focal Transformer on ImageNet with only half of the model size and computation cost, and surpasses Focal Transformer by 1.3 mAP on COCO and 2.9 mIoU on ADE20K under similar parameter and computation cost. Code has been released at https://github.com/OliverRensu/Shunted-Transformer.
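To make the merging idea in the abstract concrete, below is a minimal PyTorch sketch of one shunted self-attention layer. It is an illustrative assumption, not the authors' released implementation: queries are computed from all tokens, while the keys and values for different head groups are built from tokens merged at different rates (strided convolutions with the hypothetical rates 4 and 8), so a single layer mixes coarse and fine receptive fields. All module names, dimensions, and rates here are placeholders chosen for readability.

```python
import torch
import torch.nn as nn


class ShuntedSelfAttentionSketch(nn.Module):
    """Sketch of shunted self-attention: head groups attend to keys/values
    derived from tokens merged at different spatial rates. Illustrative only;
    hyperparameters and layer choices are assumptions, not the paper's exact code."""

    def __init__(self, dim=64, num_heads=4, merge_rates=(4, 8)):
        super().__init__()
        assert num_heads % len(merge_rates) == 0 and dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.merge_rates = merge_rates
        self.heads_per_group = num_heads // len(merge_rates)
        self.group_dim = self.heads_per_group * self.head_dim

        self.q_proj = nn.Linear(dim, dim)
        # One token-merging convolution and one key/value projection per rate group.
        self.merge = nn.ModuleList(
            [nn.Conv2d(dim, dim, kernel_size=r, stride=r) for r in merge_rates]
        )
        self.kv_proj = nn.ModuleList(
            [nn.Linear(dim, 2 * self.group_dim) for _ in merge_rates]
        )
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):
        # x: (B, N, C) token sequence laid out on an H x W grid, N = H * W.
        B, N, C = x.shape
        q = self.q_proj(x).reshape(B, N, self.num_heads, self.head_dim)
        q = q.permute(0, 2, 1, 3)  # (B, heads, N, head_dim)

        outputs = []
        for g, r in enumerate(self.merge_rates):
            # Merge tokens spatially: larger r -> fewer, coarser tokens.
            feat = x.transpose(1, 2).reshape(B, C, H, W)
            merged = self.merge[g](feat).flatten(2).transpose(1, 2)  # (B, N/r^2, C)
            k, v = self.kv_proj[g](merged).chunk(2, dim=-1)
            k = k.reshape(B, -1, self.heads_per_group, self.head_dim).permute(0, 2, 1, 3)
            v = v.reshape(B, -1, self.heads_per_group, self.head_dim).permute(0, 2, 1, 3)

            # Each head group sees the full-resolution queries but a differently
            # merged set of keys/values, yielding hybrid receptive fields per layer.
            q_g = q[:, g * self.heads_per_group:(g + 1) * self.heads_per_group]
            attn = (q_g @ k.transpose(-2, -1)) * self.scale
            outputs.append(attn.softmax(dim=-1) @ v)  # (B, heads_per_group, N, head_dim)

        out = torch.cat(outputs, dim=1).transpose(1, 2).reshape(B, N, C)
        return self.out_proj(out)


if __name__ == "__main__":
    H = W = 16
    x = torch.randn(2, H * W, 64)
    layer = ShuntedSelfAttentionSketch(dim=64, num_heads=4, merge_rates=(4, 8))
    print(layer(x, H, W).shape)  # torch.Size([2, 256, 64])
```

Because the key/value sequences are shortened by a factor of r^2 for each group, the attention matrices are smaller than in vanilla self-attention, which is consistent with the abstract's claim that merging reduces both the token number and the computational cost.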