Keywords
Meta-learning (computer science), Computer science, Artificial intelligence, Reinforcement learning, Task (project management), Machine learning, Active learning (machine learning), Initialization, Economics, Management, Programming language
Authors
Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li
Source
Venue: arXiv (Cornell University)
Date: 2017-01-01
Citations: 852
Identifier
DOI: 10.48550/arxiv.1707.09835
Abstract
Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.
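The abstract describes the core idea of Meta-SGD: meta-learn the learner's initialization together with a per-parameter learning-rate vector, so that a single SGD-like step adapts the learner to a new task. Below is a minimal sketch of that inner/outer update on a toy few-shot linear-regression problem, written in JAX; it is not the authors' implementation, and the task sampler, dimensions, and step sizes are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Mean squared error of a linear learner y_hat = x @ w.
    return jnp.mean((x @ w - y) ** 2)

def inner_update(w, alpha, x_s, y_s):
    # One SGD-like adaptation step with a learned per-parameter rate alpha:
    # w' = w - alpha * grad_w loss(w; support set).
    # The elementwise alpha controls both the step size and direction.
    g = jax.grad(loss)(w, x_s, y_s)
    return w - alpha * g

def meta_loss(meta_params, task):
    # Adapt on the support set, then evaluate on the query set.
    w0, alpha = meta_params
    x_s, y_s, x_q, y_q = task
    w_adapted = inner_update(w0, alpha, x_s, y_s)
    return loss(w_adapted, x_q, y_q)

def sample_task(key, dim=5, k_shot=5):
    # Hypothetical task generator: random linear tasks with Gaussian inputs.
    k1, k2 = jax.random.split(key)
    w_star = jax.random.normal(k1, (dim,))
    x = jax.random.normal(k2, (2 * k_shot, dim))
    y = x @ w_star
    return x[:k_shot], y[:k_shot], x[k_shot:], y[k_shot:]

# Meta-training loop: both the initialization w0 and the per-parameter
# learning rates alpha are meta-learned by gradient descent on the query loss.
key = jax.random.PRNGKey(0)
dim = 5
meta_params = (jnp.zeros(dim), 0.1 * jnp.ones(dim))  # (initialization, alpha)
meta_lr = 0.01
meta_grad = jax.jit(jax.grad(meta_loss))

for step in range(1000):
    key, subkey = jax.random.split(key)
    task = sample_task(subkey, dim=dim)
    g_w0, g_alpha = meta_grad(meta_params, task)
    w0, alpha = meta_params
    meta_params = (w0 - meta_lr * g_w0, alpha - meta_lr * g_alpha)
```

The single outer gradient flows through the one-step inner update, which is what lets the initialization and the learning-rate vector be trained jointly in one meta-learning process, as the abstract states.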