Computer Science
Artificial Intelligence
Incremental Learning
Deep Learning
Machine Learning
Authors
Justin Leo, Jugal Kalita
Source
Journal: Neurocomputing [Elsevier]
Date: 2024-03-13
Volume: 582, Article no. 127545
Citations: 2
Identifier
DOI: 10.1016/j.neucom.2024.127545
Abstract
Neural networks and deep learning algorithms are designed to function similarly to biological synaptic structures. However, classical deep learning algorithms fail to fully capture the need for continuous learning, which has led to the advent of incremental learning. Incremental learning introduces new challenges that modern state-of-the-art approaches handle in different ways. These include managing network memory, since accumulating knowledge grows the size of the network; open-set recognition, to identify inputs that belong to no previously learned class; and efficient knowledge distillation, since most incremental learning algorithms are prone to catastrophic forgetting of previously learned knowledge. Recent advancements achieve incremental learning through a multitude of methods, most of which augment standard neural network training either by directly modifying the network structure or by adding additional learning steps. This paper analyzes and provides a comprehensive survey of existing methods and techniques used for incremental learning, and introduces a novel categorization of the methods based on recent trends in state-of-the-art solutions. The study focuses on methods that achieve incremental learning successfully and discusses emerging patterns in new research.
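To make the abstract's reference to knowledge distillation against catastrophic forgetting concrete, the sketch below shows one common pattern (in the spirit of Hinton-style soft-target distillation, as used by methods such as LwF/iCaRL): a cross-entropy loss on the new task is combined with a distillation term that keeps the updated model's outputs on old classes close to those of a frozen snapshot of the pre-update model. This is a minimal illustrative sketch in PyTorch, not the surveyed paper's method; the function names and the `alpha`/`temperature` values are assumptions chosen for clarity.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target distillation loss: penalize the student (updated model)
    for diverging from the frozen teacher's softened output distribution
    over the previously learned classes."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL divergence between softened distributions, scaled by T^2
    # so gradients keep a comparable magnitude across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

def incremental_step_loss(logits, labels, old_logits, alpha=0.5):
    """Combined loss for one incremental step (illustrative weighting):
    cross-entropy on the new classes plus distillation against the
    snapshot of the old model on the old classes only."""
    ce = F.cross_entropy(logits, labels)
    kd = distillation_loss(logits[:, :old_logits.size(1)], old_logits)
    return (1 - alpha) * ce + alpha * kd

# Example usage with random tensors standing in for real batches:
# the new model has 12 output units (10 old classes + 2 new ones).
logits = torch.randn(8, 12)       # updated model's outputs
old_logits = torch.randn(8, 10)   # frozen pre-update model's outputs
labels = torch.randint(0, 12, (8,))
loss = incremental_step_loss(logits, labels, old_logits)
```

The design point this illustrates is the trade-off the abstract alludes to: the distillation term preserves old knowledge without storing old data, while `alpha` balances stability (retaining old classes) against plasticity (learning new ones).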