Keywords
Initialization, Computer Science, Overfitting, Discriminative, Generalization, Artificial Intelligence, Machine Learning, Feature (Linguistics), Pattern Recognition (Psychology), Mathematics, Artificial Neural Network, Linguistics, Mathematical Analysis, Philosophy, Programming Language
Authors
Yuanjie Shao, Wenxiao Wu, Xinge You, Changxin Gao, Nong Sang
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology (Institute of Electrical and Electronics Engineers)
Date: 2022-12-27
Volume/Issue: 33 (7): 3284-3295
Citations: 18
Identifier
DOI: 10.1109/tcsvt.2022.3232717
Abstract
Few-shot classification (FSC), which aims to identify novel classes given only a few labeled samples, has attracted considerable attention in recent years. One representative few-shot classification method is model-agnostic meta-learning (MAML), which learns an initialization that can quickly adapt to novel categories with a few annotated samples. However, because samples are scarce, MAML is prone to overfitting. Most existing MAML-based methods either improve the inner-loop update rule to achieve better generalization or constrain the outer-loop optimization to learn a more desirable initialization, without improving the two optimization processes jointly, which results in unsatisfactory performance. In this paper, we propose a bi-level constrained MAML (BLC-MAML) method for few-shot classification. Specifically, in the inner-loop optimization, we introduce a supervised contrastive loss to constrain the adaptation procedure; this loss effectively increases intra-class aggregation and inter-class separability, improving the generalization of the adapted model. In the outer loop, we propose a cross-task metric (CTM) loss that constrains the adapted model to perform well on a different few-shot task. The CTM loss encourages the adapted model to learn more discriminative and generalizable feature representations, further boosting the generalization of the learned initialization. By simultaneously constraining both levels of the optimization procedure, the proposed BLC-MAML learns an initialization with better generalization. Extensive experiments on several FSC benchmarks show that our method effectively improves the performance of MAML under both within-domain and cross-domain settings, and that it performs favorably against state-of-the-art FSC algorithms.
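To make the bi-level structure described in the abstract concrete, here is a minimal PyTorch-style sketch of where the two constraints enter a MAML loop. The MAML inner/outer updates and the supervised contrastive loss follow their standard published forms (Finn et al., 2017; Khosla et al., 2020); everything else is an assumption made for illustration, not the authors' implementation: `forward_with_params` (a hypothetical functional forward pass returning features and logits), the prototype-based `ctm_loss` standing in for the paper's cross-task metric loss (whose exact formulation is not given in the abstract), and the weighting hyperparameters `lam` and `mu`.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) on L2-normalized features."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature            # (B, B) pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, -1e9)               # exclude self-pairs
    pos_mask = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_per_row = pos_mask.sum(1).clamp(min=1)           # avoid division by zero
    return -(log_prob * pos_mask).sum(1).div(pos_per_row).mean()

def adapt(model, params, sx, sy, inner_lr, inner_steps, lam):
    """Inner loop: adapt fast weights on the support set, with the adaptation
    constrained by cross-entropy + lam * SupCon (the paper's inner-loop idea)."""
    for _ in range(inner_steps):
        # forward_with_params is an assumed functional API: features, logits.
        feats, logits = model.forward_with_params(sx, params)
        loss = F.cross_entropy(logits, sy) + lam * supcon_loss(feats, sy)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - inner_lr * g for p, g in zip(params, grads)]
    return params

def ctm_loss(model, params, task):
    """Hypothetical stand-in for the CTM loss: evaluate the adapted features on a
    *different* task with a nearest-prototype classifier. Episode labels are
    assumed to be 0..N-1 within each task."""
    s_feats, _ = model.forward_with_params(task.support_x, params)
    q_feats, _ = model.forward_with_params(task.query_x, params)
    n_way = int(task.support_y.max()) + 1
    protos = torch.stack([s_feats[task.support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(q_feats, protos)               # negative distance as score
    return F.cross_entropy(logits, task.query_y)

def meta_train_step(model, meta_opt, tasks, inner_lr, inner_steps, lam, mu):
    """Outer loop: standard MAML query loss plus mu * CTM loss on another task,
    so the initialization is constrained at both levels simultaneously."""
    meta_opt.zero_grad()
    meta_loss = 0.0
    for i, task in enumerate(tasks):
        fast = adapt(model, [p.clone() for p in model.parameters()],
                     task.support_x, task.support_y, inner_lr, inner_steps, lam)
        _, q_logits = model.forward_with_params(task.query_x, fast)
        task_loss = F.cross_entropy(q_logits, task.query_y)
        other = tasks[(i + 1) % len(tasks)]              # a different task from the batch
        meta_loss = meta_loss + task_loss + mu * ctm_loss(model, fast, other)
    (meta_loss / len(tasks)).backward()                  # second-order grads via create_graph
    meta_opt.step()
```

The key design point the sketch illustrates is that the SupCon term shapes the inner-loop adaptation trajectory, while the CTM-style term is evaluated only in the outer loop, on a task other than the one the fast weights were adapted to, so its gradient flows back to the shared initialization.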