REAF: Remembering Enhancement and Entropy-Based Asymptotic Forgetting for Filter Pruning

Authors
Xin Zhang, Weiying Xie, Yunsong Li, Kai Jiang, Leyuan Fang
Source
Journal: IEEE Transactions on Image Processing [Institute of Electrical and Electronics Engineers]
Volume 32, pp. 3912-3923; cited by: 6
Identifier
DOI: 10.1109/tip.2023.3288986
Abstract

Neurologically, filter pruning can be viewed as a procedure of forgetting followed by a recovery of remembering. Prevailing methods forget less important information directly from an unrobust baseline and then try to minimize the resulting performance sacrifice. However, unsaturated base remembering imposes a ceiling on the slimmed model, leading to suboptimal performance, and aggressive forgetting at the outset causes unrecoverable information loss. Here, we design a novel filter pruning paradigm termed Remembering Enhancement and Entropy-based Asymptotic Forgetting (REAF). Inspired by robustness theory, we first enhance remembering by over-parameterizing the baseline with fusible compensatory convolutions, which liberates the pruned model from the bondage of the baseline at no inference cost. The collateral implication between the original and compensatory filters then necessitates a bilateral-collaborated pruning criterion: a filter is preserved only when it has the largest intra-branch distance and its compensatory counterpart has the strongest remembering-enhancement power. Further, Ebbinghaus-curve-based asymptotic forgetting is proposed to protect the pruned model from unstable learning: the number of pruned filters increases asymptotically during training, so that the remembering encoded in the pretrained weights is gradually concentrated in the remaining filters. Extensive experiments demonstrate the superiority of REAF over many state-of-the-art (SOTA) methods. For example, REAF removes 47.55% of the FLOPs and 42.98% of the parameters of ResNet-50 with only a 0.98% top-1 accuracy loss on ImageNet. The code is available at https://github.com/zhangxin-xd/REAF.
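The asymptotic forgetting schedule in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the decay constant `s`, and the exact retention curve are illustrative assumptions. It only shows the general idea that a pruned-filter count following an Ebbinghaus-style retention curve R(t) = exp(-t/s) starts at zero and rises asymptotically toward the target during training.

```python
import math

def pruned_filter_count(step: int, total_steps: int,
                        target_pruned: int, s: float = 0.3) -> int:
    """How many filters to prune at a given training step.

    Ebbinghaus-style retention R(t) = exp(-t/s) decays from 1 toward 0
    as training progresses, so the pruned fraction 1 - R(t) grows
    asymptotically toward 1 and the pruned count toward target_pruned.
    """
    t = step / total_steps           # normalized training progress in [0, 1]
    retention = math.exp(-t / s)     # fraction of filters still "remembered"
    return round(target_pruned * (1.0 - retention))

# Example: pruning 64 filters over 100 steps. Early steps prune few
# filters; the count rises quickly, then flattens near the target.
schedule = [pruned_filter_count(i, 100, 64) for i in range(101)]
```

Because the schedule is monotone and flattens out, most of the forgetting happens after the compensatory branches have had time to consolidate remembering, which matches the motivation of avoiding unrecoverable early loss.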
