CondConv: Conditionally Parameterized Convolutions for Efficient Inference

Concepts
Parameterized complexity
Computer science
Inference
Convolutional neural network
Convolution (computer science)
Artificial intelligence
Encoding (set theory)
Convolutional code
Layer (electronics)
Machine learning
Theoretical computer science
Algorithm
Artificial neural network
Programming language
Decoding methods
Chemistry
Set (abstract data type)
Organic chemistry
Authors
Brandon Yang, Gabriel Bender, Quoc V. Le, Jiquan Ngiam
Source
Venue: Neural Information Processing Systems (NeurIPS)
Date: 2019-01-01
Volume: 32, pages 1307-1318
Citations: 138
Abstract
Convolutional layers are one of the basic building blocks of modern deep neural networks. One fundamental assumption is that convolutional kernels should be shared for all examples in a dataset. We propose conditionally parameterized convolutions (CondConv), which learn specialized convolutional kernels for each example. Replacing normal convolutions with CondConv enables us to increase the size and capacity of a network, while maintaining efficient inference. We demonstrate that scaling networks with CondConv improves the performance and inference cost trade-off of several existing convolutional neural network architectures on both classification and detection tasks. On ImageNet classification, our CondConv approach applied to EfficientNet-B0 achieves state-of-the-art performance of 78.3% accuracy with only 413M multiply-adds. Code and checkpoints for the CondConv Tensorflow layer and CondConv-EfficientNet models are available at: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/condconv.
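The abstract only names the idea, so here is a minimal PyTorch sketch of the per-example kernel mixing it describes: keep several expert kernels, compute input-dependent routing weights (global average pool, then a learned linear projection with a sigmoid, following the paper's described routing function), and convolve each example with its own mixed kernel. The class name CondConv2d, the default num_experts=4, and the grouped-convolution batching trick are illustrative assumptions, not the reference TensorFlow implementation linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondConv2d(nn.Module):
    """Minimal sketch of a conditionally parameterized convolution."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_experts=4, padding=1):
        super().__init__()
        self.out_ch = out_ch
        self.padding = padding
        # num_experts expert kernels; a per-example mixture of these
        # replaces the single shared kernel of an ordinary convolution.
        self.weight = nn.Parameter(
            0.01 * torch.randn(num_experts, out_ch, in_ch, kernel_size, kernel_size))
        # Routing function: global average pool -> linear -> sigmoid.
        self.routing = nn.Linear(in_ch, num_experts)

    def forward(self, x):
        b, c, h, w = x.shape
        # Per-example routing weights in (0, 1), shape (b, num_experts).
        r = torch.sigmoid(self.routing(x.mean(dim=(2, 3))))
        # Mix expert kernels into one kernel per example:
        # (b, e) x (e, o, i, kh, kw) -> (b, o, i, kh, kw).
        kernels = torch.einsum('be,eoiuv->boiuv', r, self.weight)
        # Apply each example's own kernel via a grouped-convolution trick:
        # fold the batch into the channel dimension and use groups=b.
        out = F.conv2d(x.reshape(1, b * c, h, w),
                       kernels.reshape(b * self.out_ch, c, *kernels.shape[-2:]),
                       padding=self.padding, groups=b)
        return out.reshape(b, self.out_ch, *out.shape[-2:])

layer = CondConv2d(in_ch=32, out_ch=64)
y = layer(torch.randn(8, 32, 56, 56))  # -> (8, 64, 56, 56)
```

Because the routing output is a small vector rather than a full kernel, the extra inference cost over a normal convolution is modest, which is how CondConv grows capacity while keeping inference efficient.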