Keywords
Meta-learning (computer science)
Computer science
Transfer learning
Initialization
Generalization
Artificial intelligence
Machine learning
Task (project management)
Class (philosophy)
Learning to learn
Multi-task learning
Point (geometry)
Mathematics
Mathematics education
Economics
Mathematical analysis
Management
Programming language
Geometry
Authors
Amir Erfan Eshratifar, Mohammad Saeed Abrishami, David Eigen, Massoud Pedram
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2019-07-17
Volume/Issue: 33 (01): 9937-9938
Cited by: 2
Identifier
DOI: 10.1609/aaai.v33i01.33019937
Abstract
Transfer-learning and meta-learning are two effective methods to apply knowledge learned from large data sources to new tasks. In few-class, few-shot target task settings (i.e. when there are only a few classes and training examples available in the target task), meta-learning approaches that optimize for future task learning have outperformed the typical transfer approach of initializing model weights from a pretrained starting point. But as we experimentally show, meta-learning algorithms that work well in the few-class setting do not generalize well in many-shot and many-class cases. In this paper, we propose a joint training approach that combines both transfer-learning and meta-learning. Benefiting from the advantages of each, our method obtains improved generalization performance on unseen target tasks in both few- and many-class and few- and many-shot scenarios.
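The abstract describes the method only at a high level: one objective that combines a transfer-learning term with a meta-learning term. Below is a minimal illustrative sketch of such a joint objective, not the paper's actual implementation. It assumes a MAML-style single inner gradient step for the meta term, a toy linear classifier, and hypothetical hyperparameters (inner_lr, lam) and a placeholder data sampler (task_batch), none of which come from the paper.

# A minimal sketch of a joint transfer + meta-learning objective, assuming a
# MAML-style inner step for the meta term. Model, data, and hyperparameters
# are illustrative stand-ins; the abstract does not specify these details.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy model: a single linear layer, with parameters kept as explicit tensors
# so the MAML inner step can be written functionally. In practice these would
# start from a pretrained initialization rather than random values.
w = torch.randn(5, 10, requires_grad=True)   # (num_classes, num_features)
b = torch.zeros(5, requires_grad=True)

def forward(x, w, b):
    return x @ w.t() + b

def task_batch():
    # Placeholder for sampling a task: random features and labels here.
    x = torch.randn(8, 10)
    y = torch.randint(0, 5, (8,))
    return x, y

inner_lr, lam = 0.1, 0.5                      # assumed hyperparameters
opt = torch.optim.SGD([w, b], lr=0.01)

for step in range(100):
    opt.zero_grad()

    # Transfer-learning term: ordinary supervised loss on the shared weights.
    x_s, y_s = task_batch()
    loss_transfer = F.cross_entropy(forward(x_s, w, b), y_s)

    # Meta-learning term (MAML-style): adapt on a support set with one inner
    # gradient step, then evaluate the adapted parameters on a query set.
    x_sup, y_sup = task_batch()
    inner_loss = F.cross_entropy(forward(x_sup, w, b), y_sup)
    gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
    w_adapt, b_adapt = w - inner_lr * gw, b - inner_lr * gb

    x_qry, y_qry = task_batch()
    loss_meta = F.cross_entropy(forward(x_qry, w_adapt, b_adapt), y_qry)

    # Joint objective: both terms update the same shared initialization.
    (loss_transfer + lam * loss_meta).backward()
    opt.step()

The point of the combination is that both loss terms update the same shared parameters, pulling them toward weights that both perform well directly (the transfer term) and remain easy to adapt to new tasks (the meta term).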