Keywords
Convolution (computer science), Failure, Kernel (algebra), Computer science, Convolutional neural network, Representation (politics), Algorithm, Computational complexity theory, Artificial intelligence, Pattern recognition (psychology), Mathematics, Artificial neural network, Parallel computing, Discrete mathematics, Political science, Politics, Law
Authors
Yinpeng Chen, Xiyang Dai, Mengchen Liu, Dongdong Chen, Lu Yuan, Zicheng Liu
Identifiers
DOI: 10.1109/cvpr42600.2020.01104
Abstract
Light-weight convolutional neural networks (CNNs) suffer performance degradation because their low computational budgets constrain both the depth (number of convolution layers) and the width (number of channels) of CNNs, resulting in limited representation capability. To address this issue, we present Dynamic Convolution, a new design that increases model complexity without increasing the network depth or width. Instead of using a single convolution kernel per layer, dynamic convolution aggregates multiple parallel convolution kernels dynamically based upon their attention weights, which are input dependent. Assembling multiple kernels is not only computationally efficient due to the small kernel size, but also has more representation power, since these kernels are aggregated in a non-linear way via attention. By simply using dynamic convolution in the state-of-the-art architecture MobileNetV3-Small, the top-1 accuracy of ImageNet classification is boosted by 2.9% with only 4% additional FLOPs, and a 2.9 AP gain is achieved on COCO keypoint detection.
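The core idea in the abstract — a softmax-attention-weighted sum of several parallel kernels, with the attention computed from the input itself — can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the function names, the single-linear-layer attention branch (global average pool followed by one linear map), and the tensor shapes are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def dynamic_conv2d(x, kernels, attn_w):
    """Dynamic convolution sketch.

    x:       input feature map, shape (C_in, H, W)
    kernels: K parallel kernels, shape (K, C_out, C_in, kh, kw)
    attn_w:  weights of an assumed tiny attention branch, shape (K, C_in)
    """
    # Input-dependent attention: global average pool -> linear -> softmax over K
    pooled = x.mean(axis=(1, 2))                 # (C_in,)
    pi = softmax(attn_w @ pooled)                # (K,) attention weights, sum to 1
    # Non-linear aggregation: attention-weighted sum of the K kernels
    w = np.tensordot(pi, kernels, axes=1)        # (C_out, C_in, kh, kw)
    # Apply the aggregated kernel as an ordinary (valid, stride-1) convolution
    K, C_out, C_in, kh, kw = kernels.shape
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((C_out, H - kh + 1, W - kw + 1))
    for o in range(C_out):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[o, i, j] = np.sum(w[o] * x[:, i:i + kh, j:j + kw])
    return out
```

Note that the aggregation happens in kernel space: only one convolution is computed per layer, so the extra cost is roughly the K small attention products plus the weighted kernel sum, which is why the FLOP overhead stays small.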