Computer science
Artificial intelligence
Artificial neural network
Machine learning
Transfer learning
Incremental learning
Feature (linguistics)
Distillation
Coding (set theory)
Philosophy
Linguistics
Chemistry
Organic chemistry
Set (abstract data type)
Programming language
Authors
Songsong Tian,Weijun Li,Xin Ning,Hang Ran,Hong Qin,Prayag Tiwari
Source
Journal: Neurocomputing
[Elsevier]
Date: 2023-05-13
Volume/issue: 545: 126300-126300
Citations: 40
Identifiers
DOI:10.1016/j.neucom.2023.126300
Abstract
The incremental learning paradigm in machine learning has consistently been a focus of academic research. It resembles the way biological systems learn and reduces energy consumption by avoiding excessive retraining. Existing studies use the powerful feature extraction capabilities of pre-trained models to address incremental learning, but the feature knowledge in neural networks remains under-utilized. To address this issue, this paper proposes a novel method called Pre-trained Model Knowledge Distillation (PMKD), which combines knowledge distillation of neural network representations with replay. The paper designs a loss function based on centered kernel alignment to transfer representation knowledge from the pre-trained model to the incremental model layer by layer. Additionally, a memory buffer used for Dark Experience Replay helps the model better retain past knowledge. Experiments show that PMKD achieves superior performance across various datasets and buffer sizes, and compared to other methods it reaches the best class-incremental learning accuracy. The open-source code is published at https://github.com/TianSongS/PMKD-IL.
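The abstract names centered kernel alignment (CKA) as the basis of the layer-wise distillation loss but gives no formula. Below is a minimal sketch of such a loss under the standard linear-CKA definition, assuming PyTorch; the function names linear_cka and cka_distillation_loss and the layer-matching scheme are illustrative, not taken from the paper or its released code.

import torch

def linear_cka(x, y):
    # Linear CKA between two batches of features x (n, d1) and y (n, d2);
    # returns a similarity in [0, 1].
    # Center each feature dimension over the batch.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    # ||Y^T X||_F^2 normalised by ||X^T X||_F * ||Y^T Y||_F.
    cross = (y.t() @ x).norm(p="fro") ** 2
    norm_x = (x.t() @ x).norm(p="fro")
    norm_y = (y.t() @ y).norm(p="fro")
    return cross / (norm_x * norm_y + 1e-8)

def cka_distillation_loss(student_feats, teacher_feats):
    # Layer-wise distillation term: sum of (1 - CKA) over matched layers,
    # so the loss is minimized when student and teacher representations align.
    return sum(1.0 - linear_cka(s.flatten(1), t.flatten(1))
               for s, t in zip(student_feats, teacher_feats))

# Example usage with two hypothetical matched layers and batch size 32:
student = [torch.randn(32, 64), torch.randn(32, 128)]
teacher = [torch.randn(32, 64), torch.randn(32, 128)]
loss = cka_distillation_loss(student, teacher)

In the paper's setup this term would be added to the Dark Experience Replay objective computed on samples drawn from the memory buffer; the weighting between the two terms is not specified in the abstract.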