Keywords: Computer science; Artificial intelligence; Transformer; Normalization (sociology); Segmentation; Scaling; Computer vision; Pattern recognition (psychology); Voltage; Engineering; Geometry; Anthropology; Mathematics; Electrical engineering; Sociology
Authors
Ze Liu,Han Hu,Yutong Lin,Zhuliang Yao,Zhenda Xie,Yixuan Wei,Ning Jia,Yue Cao,Zheng Zhang,Li Dong,Furu Wei,Baining Guo
Identifier
DOI:10.1109/cvpr52688.2022.01170
Abstract
We present techniques for scaling Swin Transformer [35] up to 3 billion parameters and making it capable of training with images of up to 1,536x1,536 resolution. By scaling up capacity and resolution, Swin Transformer sets new records on four representative vision benchmarks: 84.0% top-1 accuracy on ImageNet-V2 image classification, 63.1/54.4 box/mask mAP on COCO object detection, 59.9 mIoU on ADE20K semantic segmentation, and 86.8% top-1 accuracy on Kinetics-400 video action classification. We tackle issues of training instability and study how to effectively transfer models pre-trained at low resolutions to higher-resolution ones. To this end, several novel techniques are proposed: 1) a residual post-normalization technique and a scaled cosine attention approach to improve the stability of large vision models; 2) a log-spaced continuous position bias technique to effectively transfer models pre-trained on low-resolution images and windows to their higher-resolution counterparts. In addition, we share crucial implementation details that lead to significant savings of GPU memory consumption and thus make it feasible to train large vision models with regular GPUs. Using these techniques and self-supervised pre-training, we successfully train a strong 3-billion-parameter Swin Transformer model and effectively transfer it to various vision tasks involving high-resolution images or windows, achieving state-of-the-art accuracy on a variety of benchmarks. Code is available at https://github.com/microsoft/Swin-Transformer.
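The abstract names two of the proposed techniques concretely: scaled cosine attention (attention logits computed as cosine similarity between queries and keys, divided by a learnable per-head temperature) and log-spaced continuous position bias (relative coordinates mapped through a log transform so biases learned on small windows extrapolate to larger ones). Below is a minimal PyTorch sketch of both ideas, for illustration only; the class name, hyperparameters (dim, num_heads, the temperature ceiling of 1/0.01), and the exact log-spacing formula are assumptions here, not code taken from the linked repository.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaledCosineAttention(nn.Module):
    """Sketch of scaled cosine attention: logits are cosine similarities
    between queries and keys times a learnable, clamped per-head scale,
    which bounds logit magnitude and helps stabilize very large models."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=True)
        self.proj = nn.Linear(dim, dim)
        # One learnable log-scale (inverse temperature) per attention head.
        self.logit_scale = nn.Parameter(torch.log(10.0 * torch.ones(num_heads, 1, 1)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape  # (batch, tokens, channels)
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)

        # Cosine similarity = dot product of L2-normalized vectors.
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        # Clamp the scale so the effective temperature never drops below 0.01
        # (an assumed floor for this sketch).
        scale = torch.clamp(self.logit_scale, max=math.log(1.0 / 0.01)).exp()
        attn = (attn * scale).softmax(dim=-1)

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


def log_spaced_coords(relative_coords: torch.Tensor) -> torch.Tensor:
    """Log-spaced relative coordinates: sign(x) * log(1 + |x|), so a bias
    network trained on small-window offsets extrapolates more gracefully
    to the larger offsets seen at higher resolutions."""
    return torch.sign(relative_coords) * torch.log1p(relative_coords.abs())


# Usage example with illustrative sizes (a 7x7 window of 96-dim tokens):
x = torch.randn(2, 49, 96)
y = ScaledCosineAttention(dim=96, num_heads=3)(x)  # -> (2, 49, 96)
```

The key design point in both pieces is bounding magnitudes: normalizing q and k keeps attention logits in [-1, 1] regardless of model width, and the log transform compresses large coordinate offsets, which is what makes low-resolution-to-high-resolution transfer better behaved.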