Centroid
Computer science
Artificial intelligence
Forgetting
Incremental learning
Machine learning
Feature (linguistics)
Domain knowledge
Similarity (geometry)
Artificial neural network
Pattern recognition (psychology)
Image (mathematics)
Linguistics
Philosophy
Authors
Zhiwei Deng,Chang Li,Rencheng Song,Xiang Liu,Ruobing Qian,Xun Chen
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2023-11-17
Volume/Issue: 73: 1-13
Citations: 3
Identifier
DOI:10.1109/tim.2023.3334330
Abstract
When building seizure prediction systems, the typical research scenario is patient-specific: the model performs well only for the individual patient it was trained on and cannot acquire knowledge transferable to new patients, i.e., a set of universal parameters applicable to all patients. To this end, we investigate a new task scenario, domain-incremental (DI) learning, which aims to build a unified epilepsy prediction system that performs well across patients by incrementally learning new patients. However, the neural network exhibits catastrophic forgetting (CF) due to abrupt shifts in domain distributions: the learned representations drift drastically during incremental training, so the model quickly forgets the knowledge learned from past tasks. To address this, we introduce the experience replay (ER) method, which stores a few samples from previous patients and replays them while training on new patients, facilitating episodic memory formation and consolidation. In addition, we propose a novel centroid-guided ER method (CGER) that computes class centroids in the feature space from the subsets stored in the memory buffer, providing semantic memory. CGER regularizes incremental training by using cosine similarity to measure the distance between sample embeddings and class centroids, giving additional guidance for parameter updates. Experimental results demonstrate that ER substantially reduces CF and that performance improves significantly when combined with centroid guidance (CG).
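The centroid-guidance idea in the abstract — compute per-class centroids from the replay buffer, then penalize embeddings that drift from their class centroid via cosine similarity — can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the function names, the toy data, and the small epsilon stabilizer are all our assumptions.

```python
import numpy as np

def class_centroids(embeddings, labels):
    """One centroid per class, computed from replay-buffer embeddings
    (the 'semantic memory' described in the abstract)."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def cosine_similarity(a, b):
    # The 1e-8 epsilon guarding against zero-norm vectors is our assumption.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def centroid_guidance_loss(embeddings, labels, centroids):
    """Regularizer that grows as sample embeddings drift away from their
    class centroid: batch mean of (1 - cosine similarity)."""
    return float(np.mean([1.0 - cosine_similarity(e, centroids[y])
                          for e, y in zip(embeddings, labels)]))

# Toy usage: two classes with well-separated (hypothetical) embeddings.
buf_emb = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]])
buf_lab = np.array([0, 0, 1, 1])
centroids = class_centroids(buf_emb, buf_lab)
loss = centroid_guidance_loss(buf_emb, buf_lab, centroids)
```

In training, this term would be added to the usual classification loss on the mixed batch of new-patient samples and replayed buffer samples, so parameter updates are pulled toward representations that stay close to the stored class centroids.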