Computer science
Kernel (algebra)
Convolutional neural network
Parameterized complexity
Scalability
Scaling
Artificial intelligence
Tree kernel
Transformer
Contrast (vision)
Pattern recognition (psychology)
Kernel method
Machine learning
Support vector machine
Algorithm
Kernel embedding of distributions
Mathematics
Geometry
Quantum mechanics
Voltage
Physics
Combinatorics
Database
Authors
Xiaohan Ding,Xiangyu Zhang,Yizhuang Zhou,Jungong Han,Guiguang Ding,Jian Sun
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 17
Identifier
DOI:10.48550/arxiv.2203.06717
Abstract
We revisit large kernel design in modern convolutional neural networks (CNNs). Inspired by recent advances in vision transformers (ViTs), we demonstrate in this paper that using a few large convolutional kernels instead of a stack of small kernels can be a more powerful paradigm. We suggest five guidelines, e.g., applying re-parameterized large depth-wise convolutions, for designing efficient, high-performance large-kernel CNNs. Following these guidelines, we propose RepLKNet, a pure CNN architecture whose kernel size is as large as 31x31, in contrast to the commonly used 3x3. RepLKNet greatly closes the performance gap between CNNs and ViTs, e.g., achieving comparable or superior results to Swin Transformer on ImageNet and several typical downstream tasks, with lower latency. RepLKNet also scales well to big data and large models, obtaining 87.8% top-1 accuracy on ImageNet and 56.0% mIoU on ADE20K, which is highly competitive with state-of-the-art models of similar size. Our study further reveals that, in contrast to small-kernel CNNs, large-kernel CNNs have much larger effective receptive fields and a stronger bias toward shape rather than texture. Code & models at https://github.com/megvii-research/RepLKNet.
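The "re-parameterized large depth-wise convolutions" mentioned in the abstract refer to structural re-parameterization: during training, a large depth-wise kernel runs in parallel with a small depth-wise kernel, and at inference the two branches are fused into a single large kernel. The sketch below is a minimal illustration of that idea, not the authors' implementation (the official code at the GitHub link above also handles batch normalization fusion, which is omitted here); the class name ReparamLargeDWConv and its parameters are hypothetical.

```python
# Minimal sketch of re-parameterizing a large depth-wise convolution:
# train a large kernel plus a parallel small kernel, then merge the small
# kernel into the large one (zero-pad and add) for single-branch inference.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReparamLargeDWConv(nn.Module):
    def __init__(self, channels: int, large_k: int = 31, small_k: int = 5):
        super().__init__()
        assert large_k % 2 == 1 and small_k % 2 == 1
        # Depth-wise convolutions: groups == channels.
        self.large = nn.Conv2d(channels, channels, large_k,
                               padding=large_k // 2, groups=channels, bias=True)
        self.small = nn.Conv2d(channels, channels, small_k,
                               padding=small_k // 2, groups=channels, bias=True)
        self.merged = None  # populated after re-parameterization

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.merged is not None:
            return self.merged(x)              # inference: single fused branch
        return self.large(x) + self.small(x)   # training: parallel branches

    @torch.no_grad()
    def reparameterize(self) -> None:
        """Fold the small kernel into the large one by zero-padding and adding."""
        pad = (self.large.kernel_size[0] - self.small.kernel_size[0]) // 2
        fused_weight = self.large.weight + F.pad(self.small.weight, [pad] * 4)
        fused_bias = self.large.bias + self.small.bias
        merged = nn.Conv2d(self.large.in_channels, self.large.out_channels,
                           self.large.kernel_size, padding=self.large.padding,
                           groups=self.large.groups, bias=True)
        merged.weight.copy_(fused_weight)
        merged.bias.copy_(fused_bias)
        self.merged = merged
```

After calling reparameterize(), the fused module should produce the same output as the two-branch form up to floating-point error, which can be verified with torch.allclose on a random input.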