Concepts
Failure, Computer science, Kernel (algebra), Convolution (computer science), Parallel computing, Computer engineering, Performance improvement, Field (mathematics), Coding (set theory), Dimension (graph theory), Algorithm, Artificial intelligence, Mathematics, Operations management, Set (abstract data type), Combinatorics, Artificial neural network, Pure mathematics, Economics, Programming language
Authors
Weihao Yu, Pan Zhou, Shuicheng Yan, Xinchao Wang
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 29
Identifier
DOI:10.48550/arxiv.2303.16900
Abstract
Inspired by the long-range modeling ability of ViTs, large-kernel convolutions have recently been widely studied and adopted to enlarge the receptive field and improve model performance, as in the remarkable work ConvNeXt, which employs 7x7 depthwise convolution. Although such a depthwise operator consumes only a few FLOPs, it largely harms model efficiency on powerful computing devices due to high memory access costs. For example, ConvNeXt-T has FLOPs similar to ResNet-50 but achieves only 60% of its throughput when trained on A100 GPUs with full precision. Although reducing the kernel size of ConvNeXt can improve speed, it results in significant performance degradation. It is still unclear how to speed up large-kernel-based CNN models while preserving their performance. To tackle this issue, inspired by Inceptions, we propose to decompose large-kernel depthwise convolution into four parallel branches along the channel dimension, i.e., a small square kernel, two orthogonal band kernels, and an identity mapping. With this new Inception depthwise convolution, we build a series of networks, namely InceptionNeXt, which not only enjoy high throughput but also maintain competitive performance. For instance, InceptionNeXt-T achieves 1.6x higher training throughput than ConvNeXt-T, as well as a 0.2% top-1 accuracy improvement on ImageNet-1K. We anticipate that InceptionNeXt can serve as an economical baseline for future architecture design to reduce carbon footprint. Code is available at https://github.com/sail-sg/inceptionnext.
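To make the decomposition concrete, below is a minimal PyTorch sketch of an Inception depthwise convolution as the abstract describes it: the input channels are split into four groups that go through a small square depthwise kernel, two orthogonal band depthwise kernels (1xK and Kx1), and an identity mapping, and the results are concatenated back together. The specific kernel sizes (3 and 11) and the branch split ratio (1/8) are illustrative assumptions, not values taken from the abstract; see the linked repository for the authors' implementation.

import torch
import torch.nn as nn

class InceptionDWConv2d(nn.Module):
    """Sketch of an Inception depthwise convolution: four parallel
    branches (identity, small square kernel, two orthogonal band
    kernels) applied to disjoint channel groups, then concatenated."""

    def __init__(self, channels, square_kernel=3, band_kernel=11, branch_ratio=0.125):
        super().__init__()
        g = int(channels * branch_ratio)  # channels per convolution branch (assumed ratio)
        # small square depthwise kernel
        self.dw_square = nn.Conv2d(g, g, square_kernel,
                                   padding=square_kernel // 2, groups=g)
        # horizontal band kernel (1 x K)
        self.dw_band_w = nn.Conv2d(g, g, (1, band_kernel),
                                   padding=(0, band_kernel // 2), groups=g)
        # vertical band kernel (K x 1)
        self.dw_band_h = nn.Conv2d(g, g, (band_kernel, 1),
                                   padding=(band_kernel // 2, 0), groups=g)
        # the remaining channels pass through untouched (identity branch)
        self.splits = (channels - 3 * g, g, g, g)

    def forward(self, x):
        x_id, x_sq, x_w, x_h = torch.split(x, self.splits, dim=1)
        return torch.cat(
            (x_id, self.dw_square(x_sq), self.dw_band_w(x_w), self.dw_band_h(x_h)),
            dim=1,
        )

if __name__ == "__main__":
    m = InceptionDWConv2d(64)
    y = m(torch.randn(1, 64, 56, 56))
    print(y.shape)  # torch.Size([1, 64, 56, 56]) -- spatial size preserved

Because only a fraction of the channels pass through convolution while the rest are an identity mapping, memory access per layer drops relative to a full large-kernel depthwise convolution, which is consistent with the throughput gains the abstract reports.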