Computer science
Architecture
Transformer
Artificial intelligence
Pooling
Convolutional neural network
Robustness (evolution)
Computer engineering
Computer vision
Pattern recognition (psychology)
Engineering
Voltage
Electrical engineering
Art
Visual arts
Gene
Biochemistry
Chemistry
Authors
Byeongho Heo,Sangdoo Yun,Dongyoon Han,Sanghyuk Chun,Junsuk Choe,Seong Joon Oh
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 37
Identifier
DOI:10.48550/arxiv.2103.16302
Abstract
Vision Transformer (ViT) extends the application range of transformers from language processing to computer vision tasks, serving as an alternative architecture to existing convolutional neural networks (CNNs). Because transformer-based architectures are still new to computer vision modeling, the design conventions for an effective architecture have been less studied. Drawing on the successful design principles of CNNs, we investigate the role of spatial dimension conversion and its effectiveness in transformer-based architectures. We particularly attend to the dimension-reduction principle of CNNs: as depth increases, a conventional CNN increases the channel dimension and decreases the spatial dimensions. We empirically show that such spatial dimension reduction is beneficial to a transformer architecture as well, and propose a novel Pooling-based Vision Transformer (PiT) built upon the original ViT model. We show that PiT achieves improved model capability and generalization performance over ViT. Through extensive experiments, we further show that PiT outperforms the baseline on several tasks, including image classification, object detection, and robustness evaluation. Source code and ImageNet models are available at https://github.com/naver-ai/pit
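The core mechanism the abstract describes is a pooling layer inserted between transformer stages: the spatial token grid is downsampled while the channel dimension grows, mirroring the CNN convention. Below is a minimal, hypothetical PyTorch sketch of one such layer; the class name `TokenPooling`, the strided depthwise convolution, and the separate linear projection for the class token are illustrative assumptions, not the authors' implementation (the official code is in the repository linked above).

```python
import torch
import torch.nn as nn

class TokenPooling(nn.Module):
    """Hypothetical sketch of pooling between transformer stages:
    reshape spatial tokens to a 2D grid, halve the spatial resolution
    with a strided depthwise-style convolution, and widen the channels."""

    def __init__(self, dim_in, dim_out, stride=2):
        super().__init__()
        # Strided conv reduces H and W; dim_out > dim_in widens channels.
        self.conv = nn.Conv2d(dim_in, dim_out, kernel_size=stride + 1,
                              stride=stride, padding=stride // 2,
                              groups=dim_in)
        # The class token has no spatial position, so pool it separately.
        self.cls_proj = nn.Linear(dim_in, dim_out)

    def forward(self, tokens, cls_token):
        # tokens: (B, N, C) with N = H * W spatial tokens
        B, N, C = tokens.shape
        H = W = int(N ** 0.5)
        x = tokens.transpose(1, 2).reshape(B, C, H, W)  # back to a 2D grid
        x = self.conv(x)                                # spatial reduction
        x = x.flatten(2).transpose(1, 2)                # (B, N/stride^2, dim_out)
        return x, self.cls_proj(cls_token)

pool = TokenPooling(dim_in=64, dim_out=128)
x = torch.randn(2, 196, 64)       # 14x14 grid of 64-dim tokens
cls = torch.randn(2, 64)
y, cls2 = pool(x, cls)            # y: (2, 49, 128), i.e. a 7x7 grid
```

Applied to a 14x14 token grid, the layer returns a 7x7 grid of wider tokens, which is the "decrease spatial dimensions, increase channel dimension" schedule the abstract borrows from CNNs.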