Convolution (computer science)
Convolutional neural network
Computer science
Computation
Scalability
Segmentation
Artificial intelligence
Task (project management)
Contextual image classification
Kernel (algebra)
Pattern recognition (psychology)
Algorithm
Image (mathematics)
Artificial neural network
Parallel computing
Mathematics
Engineering
Combinatorics
Database
Systems engineering
Authors
Yikang Zhang, Jian Zhang, Qiang Wang, Zhao Zhong
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Citations: 10
Identifier
DOI: 10.48550/arxiv.2004.10694
Abstract
The convolution operator is the core of convolutional neural networks (CNNs) and accounts for most of their computation cost. To make CNNs more efficient, many methods have been proposed to either design lightweight networks or compress models. Although efficient network structures such as MobileNet and ShuffleNet have been proposed, we find that redundant information still exists between convolution kernels. To address this issue, we propose a novel dynamic convolution method that adaptively generates convolution kernels based on image content. To demonstrate its effectiveness, we apply dynamic convolution to multiple state-of-the-art CNNs. On one hand, it can remarkably reduce the computation cost while maintaining performance: for ShuffleNetV2/MobileNetV2/ResNet18/ResNet50, DyNet reduces FLOPs by 37.0/54.7/67.2/71.3% without loss of accuracy. On the other hand, performance can be largely boosted if the computation cost is maintained: based on the MobileNetV3-Small/Large architectures, DyNet achieves 70.3/77.1% Top-1 accuracy on ImageNet, an improvement of 2.9/1.9%. To verify scalability, we also apply DyNet to a segmentation task, where it reduces FLOPs by 69.3% while maintaining Mean IoU.
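The core idea in the abstract — generating a convolution kernel adaptively from the image content — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes the coefficient predictor is a global-average-pool followed by a linear layer with softmax (the abstract does not specify the predictor), and all names (`dynamic_conv2d`, `kernel_bank`, `w_coef`, `b_coef`) are hypothetical.

```python
import numpy as np

def dynamic_conv2d(x, kernel_bank, w_coef, b_coef):
    """Content-adaptive convolution (illustrative sketch).

    x           : (C_in, H, W) input feature map
    kernel_bank : (K, C_out, C_in, kh, kw) bank of K fixed candidate kernels
    w_coef      : (K, C_in) weights of the coefficient predictor
    b_coef      : (K,) bias of the coefficient predictor
    """
    # 1. Predict K mixing coefficients from the input content
    #    (global average pooling -> linear -> softmax).
    pooled = x.mean(axis=(1, 2))                      # (C_in,)
    logits = w_coef @ pooled + b_coef                 # (K,)
    coef = np.exp(logits - logits.max())
    coef /= coef.sum()                                # softmax, sums to 1

    # 2. Fuse the bank into ONE content-dependent kernel, so only a
    #    single ordinary convolution is executed afterwards.
    kernel = np.tensordot(coef, kernel_bank, axes=1)  # (C_out, C_in, kh, kw)

    # 3. Plain "valid" convolution with the fused kernel.
    C_out, C_in, kh, kw = kernel.shape
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((C_out, H - kh + 1, W - kw + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = x[:, i:i + kh, j:j + kw]          # (C_in, kh, kw)
            out[:, i, j] = np.tensordot(kernel, patch, axes=3)
    return out, coef
```

Fusing the coefficients into a single kernel before convolving (step 2) is what makes such a scheme cheap at inference time: the per-image overhead is only the tiny predictor plus a weighted sum of kernels, rather than K separate convolutions.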