Convolution (computer science)
Computer science
Kernel (algebra)
Computation
Overhead (engineering)
Representation (politics)
Coding (set theory)
Theoretical computer science
Algorithm
Artificial intelligence
Pattern recognition (psychology)
Mathematics
Discrete mathematics
Artificial neural network
Programming language
Politics
Law
Set (abstract data type)
Political science
Authors
Xuran Pan, Chunjiang Ge, Rui Lü, Shiji Song, Guan-Fu Chen, Zeyi Huang, Gao Huang
Identifier
DOI:10.1109/cvpr52688.2022.00089
Abstract
Convolution and self-attention are two powerful techniques for representation learning, and they are usually considered as two peer approaches that are distinct from each other. In this paper, we show that there exists a strong underlying relation between them, in the sense that the bulk of the computations of these two paradigms is in fact done with the same operation. Specifically, we first show that a traditional convolution with kernel size k × k can be decomposed into k² individual 1 × 1 convolutions, followed by shift and summation operations. Then, we interpret the projections of queries, keys, and values in the self-attention module as multiple 1 × 1 convolutions, followed by the computation of attention weights and aggregation of the values. Therefore, the first stage of both modules comprises a similar operation. More importantly, the first stage accounts for the dominant computational complexity (quadratic in the channel size) compared to the second stage. This observation naturally leads to an elegant integration of these two seemingly distinct paradigms, i.e., a mixed model that enjoys the benefits of both self-Attention and Convolution (ACmix), while having minimal computational overhead compared to its pure convolution or self-attention counterparts. Extensive experiments show that our model achieves consistently improved results over competitive baselines on image recognition and downstream tasks. Code and pre-trained models will be released at https://github.com/LeapLabTHU/ACmix and https://gitee.com/mindspore/models.
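
To make the decomposition described in the abstract concrete, below is a minimal numerical sketch in PyTorch; this is an illustration rather than the authors' code (the official implementation is in the repository linked above), and the tensor sizes and variable names are chosen only for demonstration. It rewrites a standard k × k convolution as k² individual 1 × 1 convolutions whose outputs are shifted by their kernel offsets and summed, then checks the result against a direct F.conv2d call.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Illustrative sizes: batch, input/output channels, spatial size, kernel size.
B, C_in, C_out, H, W, k = 2, 8, 16, 14, 14, 3
x = torch.randn(B, C_in, H, W)
weight = torch.randn(C_out, C_in, k, k)

# Reference: a standard k x k convolution with 'same' zero padding.
ref = F.conv2d(x, weight, padding=k // 2)

# Stage 1: each kernel position (i, j) defines a 1 x 1 convolution.
# Stage 2: shift each 1 x 1 output by that position's offset and sum.
out = torch.zeros(B, C_out, H, W)
p = k // 2
for i in range(k):
    for j in range(k):
        w_ij = weight[:, :, i, j].unsqueeze(-1).unsqueeze(-1)  # (C_out, C_in, 1, 1)
        y = F.conv2d(x, w_ij)                 # 1 x 1 convolution: cost ~ C_in * C_out * H * W
        y = F.pad(y, (p, p, p, p))            # zero fill so shifted-out positions read zero
        out = out + y[:, :, i:i + H, j:j + W]  # shift by (i - p, j - p) and accumulate

print(torch.allclose(ref, out, atol=1e-4))  # expected: True

In the same spirit, the query, key, and value projections of self-attention are themselves 1 × 1 convolutions, so the two paradigms can share this projection stage. The shared 1 × 1 projections scale with C_in · C_out (quadratic in the channel width), while the shift-and-sum of convolution and the attention weighting and aggregation scale only linearly with the channel count, which is why combining the two branches after a shared first stage adds little overhead.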