Transformer
Convolutional neural network
Computer science
FLOPS
Artificial intelligence
Artificial neural network
Machine learning
Pattern recognition (psychology)
Parallel computing
Voltage
Engineering
Electrical engineering
Authors
Jianyuan Guo,Kai Han,Han Wu,Chang Xu,Yehui Tang,Chunjing Xu,Yunhe Wang
Source
Journal: Cornell University - arXiv
Date: 2021-07-13
Citations: 10
Identifier
DOI: 10.48550/arxiv.2107.06263
Abstract
Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image. However, there are still gaps in both performance and computational cost between transformers and existing convolutional neural networks (CNNs). In this paper, we aim to address this issue and develop a network that can outperform not only the canonical transformers, but also the high-performance convolutional models. We propose a new transformer-based hybrid network that takes advantage of transformers to capture long-range dependencies and of CNNs to model local features. Furthermore, we scale it to obtain a family of models, called CMTs, obtaining much better accuracy and efficiency than previous convolution- and transformer-based models. In particular, our CMT-S achieves 83.5% top-1 accuracy on ImageNet, while being 14x and 2x smaller on FLOPs than the existing DeiT and EfficientNet, respectively. The proposed CMT-S also generalizes well on CIFAR10 (99.2%), CIFAR100 (91.7%), Flowers (98.7%), and other challenging vision datasets such as COCO (44.3% mAP), with considerably less computational cost.
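To make the hybrid idea in the abstract concrete, below is a minimal sketch of a block that combines a depthwise convolution (local features) with multi-head self-attention (long-range dependencies), assuming PyTorch. It is an illustrative approximation, not the authors' actual CMT block; the class name `HybridBlock` and all hyperparameters are hypothetical.

```python
# Hypothetical sketch of a conv + self-attention hybrid block (not the official CMT code).
import torch
import torch.nn as nn


class HybridBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Depthwise 3x3 convolution models local structure.
        self.local_conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.norm1 = nn.LayerNorm(dim)
        # Multi-head self-attention captures long-range dependencies.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # Feed-forward network, as in a standard transformer block.
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map.
        b, c, h, w = x.shape
        x = x + self.local_conv(x)                # local (convolutional) branch
        tokens = x.flatten(2).transpose(1, 2)     # (B, H*W, C) token sequence
        t = self.norm1(tokens)
        attn_out, _ = self.attn(t, t, t)
        tokens = tokens + attn_out                # global (attention) branch
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = HybridBlock(dim=64, num_heads=4)
    out = block(torch.randn(2, 64, 14, 14))
    print(out.shape)  # torch.Size([2, 64, 14, 14])
```

Stacking such blocks over a convolutional stem, and scaling width and depth, gives a model family analogous in spirit to the CMT variants described above.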