Computer science
Artificial intelligence
Meta-learning (computer science)
Machine learning
Theory (stability in learning)
Generalization
Smoothing
Shot (as in few-shot)
Quality (philosophy)
Image (mathematics)
Pattern recognition (psychology)
Computer vision
Mathematics
Philosophy
Economics
Mathematical analysis
Organic chemistry
Chemistry
Management
Epistemology
Task (project management)
Authors
Xiaomeng Zhu,Shuxiao Li
Identifier
DOI:10.1016/j.neucom.2022.10.012
Abstract
At present, image classification covers more and more fields, yet in some specific scenarios, such as medicine or the personalized customization of robots, it is often difficult to obtain enough data for learning. Few-shot image classification aims to quickly learn new classes from only a few images, and meta-learning has become the mainstream approach due to its good performance. However, the generalization ability of meta-learning methods is still poor, and training is easily disturbed by low-quality images. To address these problems, this paper proposes Momentum Group Meta-Learning (MGML) for more effective few-shot learning, which consists of a Group Meta-Learning (GML) module and an Adaptive Momentum Smoothing (AMS) module. GML obtains an ensemble model by training multiple episodes in parallel and then grouping them, which reduces the interference of low-quality samples and improves the stability of meta-learning training. AMS applies an adaptive momentum update rule to integrate the models of different groups, so that the model can accumulate experience across more scenarios and generalize better. We conduct experiments on the miniImageNet and tieredImageNet datasets. The results show that MGML improves the accuracy, stability, and cross-domain transfer ability of few-shot image classification and can be applied to different few-shot learning models.
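The abstract describes momentum-based smoothing of model parameters across groups of episodes, but gives no formulas. The sketch below illustrates the general idea of a momentum (exponential-moving-average) merge of group-level parameters into a global model; the function name, the fixed coefficient `beta`, and the toy parameters are all hypothetical illustrations, not the paper's actual adaptive rule.

```python
import numpy as np

def momentum_smooth(global_params, group_params, beta=0.9):
    """Merge a group model into the global model with momentum:
    new_global = beta * global + (1 - beta) * group.
    `beta` is a hypothetical fixed coefficient; the paper's AMS module
    chooses it adaptively (details not given in the abstract)."""
    return {name: beta * global_params[name] + (1.0 - beta) * group_params[name]
            for name in global_params}

# Toy example: a global model and one group model, each with two tensors.
global_params = {"w": np.zeros(3), "b": np.zeros(1)}
group_params = {"w": np.ones(3), "b": np.ones(1)}
merged = momentum_smooth(global_params, group_params, beta=0.9)
# Each merged weight is 0.9 * 0 + 0.1 * 1 = 0.1
```

In practice, such a merge would be applied repeatedly as successive groups finish training, so that the global model retains a smoothed memory of earlier groups rather than being overwritten by the latest one.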