Keywords
Overfitting
Artificial intelligence
Computer science
Linear subspace
Pattern recognition (psychology)
Contextual image classification
Metric (unit)
Subspace topology
Machine learning
Meta-learning (computer science)
Set (abstract data type)
Sample (material)
Image (mathematics)
Data mining
Mathematics
Artificial neural network
Operations management
Chemistry
Geometry
Management
Chromatography
Programming language
Economics
Task (project management)
Authors
Linglong Tan, Fengzhi Wu, Xianmeng Meng
Identifier
DOI:10.1109/ccisp59915.2023.10355802
Abstract
This paper focuses on metric-based few-shot learning, in which the model adjusts its parameters according to its performance on the meta-tasks encountered during training in order to obtain a robust parameter distribution. Because each category is defined by only a small number of samples, the final model is prone to overfitting. To address this, the paper introduces a parameter-adaptive adjustment mechanism that identifies the main source of the current model error and adjusts the gradient backpropagation strength accordingly, preventing overfitting caused by sampling errors during the meta-learning training phase. Within the framework of existing few-shot image classification algorithms, the paper proposes a few-shot image classification algorithm based on a cross-attention subspace metric: a cross-attention mechanism correlates information between support-set and query-set sample features, which are then projected into separate subspaces. Different subspaces represent different categories, and classification accuracy is improved by measuring the distance between a test sample and each class subspace while increasing the distance between different subspaces to separate categories. To verify its effectiveness, the proposed method is evaluated on Mini-ImageNet and tiered-ImageNet, two common few-shot image datasets, and compared with classical few-shot image classification algorithms; the experimental results show improved classification accuracy on both datasets.
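The cross-attention step described above can be illustrated with a minimal NumPy sketch: query-set features attend over support-set features so that each query feature is re-expressed in terms of the support samples. This is a generic single-head attention without learned projection matrices, an assumption for illustration only, not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, support_feats):
    """Let query features attend over support features.

    query_feats: (n_query, d) array, support_feats: (n_support, d) array.
    Returns (n_query, d): each query feature as an attention-weighted
    combination of support features (scaled dot-product, single head,
    no learned projections).
    """
    d = query_feats.shape[1]
    scores = query_feats @ support_feats.T / np.sqrt(d)  # (n_query, n_support)
    weights = softmax(scores, axis=1)                    # rows sum to 1
    return weights @ support_feats
```

In a full model, learned query/key/value projections would typically precede this step; here they are omitted to keep the correlation mechanism itself visible.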
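The subspace-metric classification idea can likewise be sketched: each class's support features span a low-dimensional subspace, and a query sample is assigned to the class whose subspace it lies closest to (smallest projection residual). This is an illustrative reconstruction under stated assumptions, not the paper's implementation; the SVD-based subspace construction and the subspace dimension are choices made here for the example.

```python
import numpy as np

def class_subspace(support_feats, dim):
    """Build a dim-dimensional affine subspace from one class's support features.

    support_feats: (n_shot, d) feature matrix for a single class.
    Returns (mean, basis) where basis is a (d, dim) orthonormal matrix
    whose columns span the principal directions of the centered features.
    """
    mean = support_feats.mean(axis=0)
    centered = support_feats - mean
    # Top right-singular vectors span the class's principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:dim].T

def subspace_distances(query_feat, subspaces):
    """Residual distance from one query feature to each class subspace."""
    dists = []
    for mean, basis in subspaces:
        v = query_feat - mean
        proj = basis @ (basis.T @ v)      # projection onto the subspace
        dists.append(np.linalg.norm(v - proj))
    return np.array(dists)
```

The predicted class is then `np.argmin` over these distances; the abstract's additional objective of pushing different class subspaces apart would be handled by a training loss, which is outside the scope of this sketch.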