Keywords: Computer Science, Deep Learning, Pruning, Artificial Intelligence, Artificial Neural Network, Machine Learning, Process (Computing), Software Deployment, Consistency (Knowledge Bases), Deep Neural Network, Inference, Software Engineering, Agronomy, Biology, Operating System
Authors
Zhiyu Zhu, Huaming Chen, Zhibo Jin, Xinyi Wang, J. Z. Zhang, Minhui Xue, Qinghua Lu, Jun Shen, Kim‐Kwang Raymond Choo
Identifier
DOI:10.1145/3583780.3614889
Abstract
The rapid development of deep learning has demonstrated its potential for deployment in many intelligent service systems. However, issues such as optimisation (e.g., how to reduce deployment resource costs and further improve detection speed), especially in resource-constrained scenarios, remain challenging to address. In this paper, we delve into the principles of deep neural networks, focusing on the importance of network neurons. The goal is to identify the neurons that exert minimal impact on model performance, thereby aiding the model pruning process. In this work, we thoroughly consider the deep learning model pruning process both with and without a fine-tuning step, ensuring consistency of model performance. To achieve our objectives, we propose a methodology that employs adversarial attack methods to explore deep neural network parameters. This approach is combined with an innovative attribution algorithm that analyses the degree of involvement of network neurons. In our experiments, our approach effectively quantifies the importance of network neurons. We extend the evaluation through comprehensive experiments conducted on a range of datasets, including CIFAR-10, CIFAR-100 and Caltech101. The results demonstrate that our method consistently achieves state-of-the-art performance compared with many existing methods. We anticipate that this work will help reduce the heavy training and inference costs of deep neural network models, enabling lightweight deep-learning-enhanced services and systems. The source code is available at https://github.com/LMBTough/FVW.
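The abstract does not spell out the authors' FVW attribution algorithm, but the general idea of scoring neurons by their involvement and pruning the least important ones can be illustrated with a minimal gradient-times-activation sketch. The following numpy example is a generic illustration, not the paper's method: the two-layer network, the squared-norm loss, and the importance score `|gradient * activation|` are all assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP: x -> h = relu(W1 @ x) -> y = W2 @ h
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(2, 8))

def forward(x):
    h = np.maximum(W1 @ x, 0.0)      # hidden activations
    return h, W2 @ h

def neuron_scores(x):
    """Gradient-times-activation attribution for the hidden neurons."""
    h, y = forward(x)
    # Surrogate loss L = 0.5 * ||y||^2, so dL/dy = y and
    # dL/dh = W2^T y, masked by the ReLU derivative.
    g = (W2.T @ y) * (h > 0)
    return np.abs(g * h)             # importance score per hidden neuron

x = rng.normal(size=4)
scores = neuron_scores(x)

# Prune the k least important neurons by zeroing their outgoing weights.
k = 3
prune = np.argsort(scores)[:k]
W2_pruned = W2.copy()
W2_pruned[:, prune] = 0.0

# Check how much the pruned model's output deviates from the full model.
h, y_full = forward(x)
y_pruned = W2_pruned @ h
rel_change = np.linalg.norm(y_full - y_pruned) / np.linalg.norm(y_full)
print("relative output change after pruning:", rel_change)
```

Because the zeroed neurons were the ones with the smallest attribution scores, the pruned network's output should stay close to the original, which is the consistency property the paper targets (in practice, an optional fine-tuning step would further recover accuracy).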