MNIST database
Failure
Inference
Computer science
Redundancy (engineering)
Pruning
Convolutional neural network
Artificial intelligence
Artificial neural network
Pattern recognition (psychology)
Feature (linguistics)
Deep learning
Deep neural network
Machine learning
Philosophy
Parallel computing
Operating system
Biology
Linguistics
Agronomy
Authors
Babajide O. Ayinde, Tamer Inanc, Jacek M. Żurada
Identifier
DOI: 10.1016/j.neunet.2019.04.021
Abstract
This paper presents an efficient technique to reduce the inference cost of deep and/or wide convolutional neural network models by pruning redundant features (or filters). Previous studies have shown that over-sized deep neural network models tend to produce many redundant features that are either shifted versions of one another or are very similar and show little or no variation, thus resulting in filtering redundancy. We propose to prune these redundant features, along with their related feature maps, according to their relative cosine distances in the feature space, leading to smaller networks with reduced post-training inference computational costs and competitive performance. We empirically show on selected models (VGG-16, ResNet-56, ResNet-110, and ResNet-34) and datasets (MNIST handwritten digits, CIFAR-10, and ImageNet) that inference costs (in FLOPs) can be significantly reduced while overall performance remains competitive with the state of the art.
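The abstract describes pruning filters whose pairwise cosine distances in the feature space indicate redundancy. The sketch below is only a rough illustration of that idea, not the authors' implementation: it flattens each convolutional filter, computes pairwise cosine distances, and greedily keeps a filter only if it is sufficiently far from every filter already kept. The function name `prune_redundant_filters`, the threshold `tau`, and the greedy selection order are assumptions made here for illustration.

```python
import numpy as np

def prune_redundant_filters(weights, tau=0.3):
    """Return indices of filters to keep from a conv weight tensor.

    weights: array of shape (out_channels, in_channels, kH, kW).
    tau: cosine-distance threshold below which two filters are treated
         as redundant (an assumed value, not taken from the paper).
    """
    n = weights.shape[0]
    flat = weights.reshape(n, -1)
    # Normalize each flattened filter to unit length.
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12
    unit = flat / norms
    # Pairwise cosine distance: 1 - cosine similarity.
    dist = 1.0 - unit @ unit.T

    keep = []
    for i in range(n):
        # Keep filter i only if it is far enough (in cosine distance)
        # from every filter already kept; otherwise treat it as redundant.
        if all(dist[i, j] > tau for j in keep):
            keep.append(i)
    return keep


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 3, 3, 3))
    # Inject a near-duplicate (scaled copy) of filter 0 to see it pruned.
    w[1] = w[0] * 1.05
    kept = prune_redundant_filters(w, tau=0.3)
    print(f"kept {len(kept)} of {w.shape[0]} filters")
```

In practice, the kept indices would be used to slice the layer's weight tensor and the corresponding feature maps (and the next layer's input channels), which is what yields the reduced post-training FLOPs the abstract refers to.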