REAF: Remembering Enhancement and Entropy-Based Asymptotic Forgetting for Filter Pruning

Authors
Xin Zhang, Weiying Xie, Yunsong Li, Kai Jiang, Leyuan Fang
Source
Journal: IEEE Transactions on Image Processing [Institute of Electrical and Electronics Engineers]
Volume/pages: 32: 3912-3923; cited by: 6
Identifier
DOI: 10.1109/tip.2023.3288986
Abstract

Neurologically, filter pruning is a procedure of forgetting and remembering recovery. Prevailing methods directly forget less important information from an unrobust baseline first and expect to minimize the performance sacrifice. However, unsaturated baseline remembering imposes a ceiling on the slimmed model, leading to suboptimal performance, and forgetting significantly at first causes unrecoverable information loss. Here, we design a novel filter pruning paradigm termed Remembering Enhancement and Entropy-based Asymptotic Forgetting (REAF). Inspired by robustness theory, we first enhance remembering by over-parameterizing the baseline with fusible compensatory convolutions, which liberates the pruned model from the bondage of the baseline at no inference cost. The collateral implication between original and compensatory filters then necessitates a bilateral-collaborated pruning criterion: a pair of filters is preserved only when the original filter has the largest intra-branch distance and its compensatory counterpart has the strongest remembering-enhancement power. Further, Ebbinghaus-curve-based asymptotic forgetting is proposed to protect the pruned model from unstable learning. The number of pruned filters increases asymptotically during training, which enables the remembering of pretrained weights to be gradually concentrated in the remaining filters. Extensive experiments demonstrate the superiority of REAF over many state-of-the-art (SOTA) methods. For example, REAF removes 47.55% of the FLOPs and 42.98% of the parameters of ResNet-50 with only a 0.98% top-1 accuracy loss on ImageNet. The code is available at https://github.com/zhangxin-xd/REAF.
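The "fusible compensatory convolutions at no inference cost" rely on a standard structural re-parameterization fact: two parallel convolutions applied to the same input and summed are equivalent to one convolution whose kernel is the elementwise sum of the two kernels. A minimal NumPy sketch of that fusion, assuming a 3x3 main kernel and a 1x1 compensatory kernel (REAF's actual compensatory branch design is not reproduced here; `conv2d` and `fuse` are illustrative helpers):

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' cross-correlation of a 2-D input x with kernel w."""
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def fuse(w_main, w_comp):
    """Fold a smaller compensatory kernel into the main kernel.

    Zero-padding the compensatory kernel to the main kernel's size and
    summing exploits the linearity of convolution, so the fused branch
    costs nothing extra at inference time.
    """
    pad = (w_main.shape[0] - w_comp.shape[0]) // 2
    return w_main + np.pad(w_comp, pad)

# Demonstrate branch-sum == fused-kernel equivalence on random data.
rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
w_main = rng.standard_normal((3, 3))
w_comp = rng.standard_normal((1, 1))

branch_sum = conv2d(x, w_main) + conv2d(x, np.pad(w_comp, 1))
fused_out = conv2d(x, fuse(w_main, w_comp))
```

`branch_sum` and `fused_out` agree to floating-point precision, which is why the over-parameterized training-time model collapses back to the baseline architecture for deployment.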
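The Ebbinghaus forgetting curve models retention as an exponential decay, R(t) = exp(-t/S). An asymptotic pruning schedule in that spirit removes filters quickly at first and saturates toward the target count. The sketch below is an assumption-laden illustration, not the paper's exact schedule: the time constant `tau` and the rescaling that makes the count reach `n_target` exactly at the final step are choices made here for clarity.

```python
import math

def pruned_count(step, total_steps, n_target, tau=0.3):
    """Ebbinghaus-inspired asymptotic pruning schedule (illustrative).

    Returns how many filters are pruned at a given training step:
    exponential saturation toward n_target, rescaled so the target is
    hit exactly at the last step.
    """
    t = step / total_steps  # normalized time in [0, 1]
    frac = (1 - math.exp(-t / tau)) / (1 - math.exp(-1 / tau))
    return round(n_target * frac)

# Schedule for pruning 64 filters over 100 steps.
schedule = [pruned_count(s, 100, 64) for s in range(101)]
```

Because the count only ever increases, remembering of the pretrained weights can migrate gradually into the surviving filters instead of being discarded in one step.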