Keywords: Computer science, Meta-learning (computer science), Transfer of learning, Artificial intelligence, Inductive transfer, Machine learning, Task (project management), Multi-task learning, Artificial neural network, Extractor, Class (philosophy), Feature (linguistics), Robot learning, Engineering, Management, Economics, Philosophy, Mobile robot, Robot, Linguistics, Process engineering
Authors
Yan Chu, Xianghui Sun, Jiang Songhao, Tianwen Xie, Zhengkui Wang, Wen Shan
Identifier
DOI:10.1007/978-3-031-44198-1_30
Abstract
Few-shot learning is a challenging problem that aims to adapt to new tasks with only a few labeled samples. Meta-learning is a promising approach to this challenge, but the meta-knowledge learned on training sets is not always useful because of class imbalance, task imbalance, and distribution imbalance. In this paper, we propose a novel few-shot learning method based on meta-transfer learning, called Meta-Transfer Task-Adaptive Meta-Learning (MT-TAML). Meta-transfer learning transfers the weight parameters of a pre-trained deep neural network, compensating for the limitations of using shallow networks as the feature extractor. To address the imbalance problem in realistic few-shot learning scenarios, we introduce a learnable parameter that balances the meta-knowledge for each task. Additionally, we propose a novel task training strategy that selects the most difficult class in each task and re-samples from it to form a difficult task, thereby improving the model's accuracy. Our experimental results show that MT-TAML outperforms existing few-shot learning methods by 2–4%. Furthermore, our ablation experiments confirm the effectiveness of combining meta-transfer learning with the learnable balancing parameters.
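The hard-task strategy described above can be illustrated with a minimal sketch. The paper itself does not publish this code, so the function and parameter names below (`build_hard_task`, `per_class_acc`, `samples_by_class`) are hypothetical; the sketch assumes the strategy ranks classes by their query-set accuracy in completed episodes, keeps the lowest-accuracy ("difficult") classes, and re-samples support examples from them to form a new, harder task:

```python
import random

def build_hard_task(per_class_acc, samples_by_class, n_way=5, k_shot=1):
    """Form a 'difficult task' by re-sampling from the hardest classes.

    per_class_acc:    dict mapping class id -> query-set accuracy
                      observed for that class in earlier episodes.
    samples_by_class: dict mapping class id -> list of available samples.
    Returns a dict mapping each chosen hard class to k_shot support samples.
    """
    # Rank classes by accuracy (ascending) and keep the n_way hardest.
    hard_classes = sorted(per_class_acc, key=per_class_acc.get)[:n_way]
    task = {}
    for c in hard_classes:
        # Re-sample k_shot support examples from each difficult class.
        task[c] = random.sample(samples_by_class[c], k_shot)
    return task
```

In an actual meta-training loop, a task built this way would be fed back to the meta-learner alongside the ordinary randomly sampled tasks, so that gradient updates concentrate on the classes the model currently handles worst.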