Computer science
Pruning
Convolutional neural network
Artificial intelligence
Slicing
Deep learning
Construct (Python library)
Machine learning
Heuristic
Artificial neural network
Mobile device
Pattern recognition (psychology)
Operating system
World Wide Web
Biology
Programming language
Agronomy
Authors
Xinrui Jiang,Nannan Wang,Jingwei Xin,Xiaobo Xia,Xi Yang,Xinbo Gao
Identifier
DOI:10.1016/j.neunet.2021.08.002
Abstract
Single image super-resolution (SISR) has achieved significant performance improvements thanks to deep convolutional neural networks (CNNs). However, deep learning-based methods are computationally intensive and memory demanding, which limits their practical deployment, especially on mobile devices. Focusing on this issue, in this paper we present a novel approach to compressing SR networks by weight pruning. To achieve this goal, we first explore a progressive optimization method that gradually zeroes out redundant parameters. Then, we construct a sparse-aware attention module by exploring an attention strategy well suited to pruning. Finally, we propose an information multi-slicing network that extracts and integrates multi-scale features at a granular level to obtain a more lightweight and accurate SR network. Extensive experiments show that the pruning method can reduce model size without a noticeable drop in performance, making it possible to apply state-of-the-art SR models in real-world applications. Furthermore, our pruned versions achieve better accuracy and visual quality than state-of-the-art methods.
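The first component the abstract describes, progressively zeroing out redundant parameters, can be illustrated with a generic magnitude-based schedule. The sketch below is a common progressive-pruning heuristic expressed in NumPy, not the paper's actual optimization method (the abstract does not specify the criterion or schedule); the function name, the linear sparsity ramp, and the magnitude threshold are all illustrative assumptions.

```python
import numpy as np

def progressive_prune(weights, target_sparsity, steps):
    """Gradually zero out the smallest-magnitude weights over `steps`
    rounds, ramping sparsity linearly up to `target_sparsity`.
    A generic heuristic sketch, not the paper's exact procedure."""
    w = weights.copy()
    mask = np.ones_like(w, dtype=bool)
    for step in range(1, steps + 1):
        sparsity = target_sparsity * step / steps   # linear ramp-up
        k = int(np.floor(sparsity * w.size))        # weights to zero this round
        if k > 0:
            # k-th smallest absolute value becomes the pruning threshold
            threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
            mask = np.abs(w) > threshold
        w = w * mask                                # zero out pruned weights
        # (in training, the surviving weights would be fine-tuned here)
    return w, mask

# Example: prune a random weight matrix to 50% sparsity in 5 rounds
rng = np.random.default_rng(0)
dense = rng.normal(size=(64, 64))
sparse, mask = progressive_prune(dense, target_sparsity=0.5, steps=5)
```

In a real training loop, each pruning round would be followed by fine-tuning the remaining weights so the network can recover before the next round removes more parameters.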