Computer science
Artificial intelligence
Machine learning
Class
Discriminative
Embedding
Authors
Xuan Wang, Zhong Ji, Yunlong Yu, Yanwei Pang, Jungong Han
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume 33, pp. 4419-4431
Identifier
DOI: 10.1109/tip.2024.3434475
Abstract
Few-Shot Class-Incremental Learning (FSCIL) aims at incrementally learning new knowledge from limited training examples without forgetting previous knowledge. However, we observe that existing methods face a challenge known as supervision collapse, where the model disproportionately emphasizes class-specific features of base classes to the detriment of novel class representations, leading to restricted cognitive capabilities. To alleviate this issue, we propose a new framework, Model aTtention Expansion for Few-Shot Class-Incremental Learning (MTE-FSCIL), aimed at expanding the model's attention fields to improve transferability without compromising the discriminative capability for base classes. Specifically, the framework adopts a dual-stage training strategy, comprising pre-training and meta-training stages. In the pre-training stage, we present a new regularization technique, named the Reserver (RS) loss, to expand the global perception and reduce over-reliance on class-specific features by amplifying feature map activations. During the meta-training stage, we introduce the Repeller (RP) loss, a novel pair-based loss that promotes variation in representations and improves the model's recognition of sample uniqueness by scattering intra-class samples within the embedding space. Furthermore, we propose a Transformational Adaptation (TA) strategy to enable continuous incorporation of new knowledge from downstream tasks, thus facilitating cross-task knowledge transfer. Extensive experimental results on mini-ImageNet, CIFAR100, and CUB200 datasets demonstrate that our proposed framework consistently outperforms the state-of-the-art methods.
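The abstract describes the Repeller (RP) loss only at a high level: a pair-based objective that scatters intra-class samples in the embedding space. The paper's exact formulation is not given here, so the following PyTorch sketch is only an illustration of that general idea, not the authors' RP loss; the function name, the margin parameter, and the use of cosine similarity are assumptions made for the example.

import torch
import torch.nn.functional as F

def repeller_style_loss(embeddings: torch.Tensor,
                        labels: torch.Tensor,
                        margin: float = 0.5) -> torch.Tensor:
    # Illustrative pair-based loss (not the paper's exact RP loss):
    # push apart same-class embeddings whose cosine similarity exceeds
    # `margin`, encouraging intra-class samples to spread out.
    z = F.normalize(embeddings, dim=1)               # unit-norm embeddings
    sim = z @ z.t()                                  # pairwise cosine similarity
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    same.fill_diagonal_(False)                       # ignore self-pairs
    if not same.any():
        return embeddings.new_zeros(())
    # Penalize same-class pairs that sit too close together in the embedding space.
    return F.relu(sim - margin)[same].mean()

# Toy usage: 8 random 64-d embeddings drawn from 4 classes.
emb = torch.randn(8, 64, requires_grad=True)
lbl = torch.randint(0, 4, (8,))
loss = repeller_style_loss(emb, lbl)
loss.backward()

In practice such a term would be weighted against a standard classification loss during the meta-training stage, since scattering intra-class samples alone would otherwise conflict with class separability.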