Computer science
Meta-learning (computer science)
Artificial intelligence
Online machine learning
Machine learning
Stochastic gradient descent
Gradient descent
Instance-based learning
Scratch
Artificial neural network
Learning classifier system
Active learning (machine learning)
Operating system
Economics
Management
Task (project management)
Authors
Sepp Hochreiter,A. Steven Younger,Peter R. Conwell
Identifiers
DOI:10.1007/3-540-44668-0_13
Abstract
This paper introduces the application of gradient descent methods to meta-learning. The concept of "meta-learning", i.e. of a system that improves or discovers a learning algorithm, has been of interest in machine learning for decades because of its appealing applications. Previous meta-learning approaches have been based on evolutionary methods and have therefore been restricted to small models with few free parameters. We make meta-learning in large systems feasible by using recurrent neural networks, with their attendant learning routines, as meta-learning systems. Our system derived complex, well-performing learning algorithms from scratch. In this paper we also show that our approach handles non-stationary time series prediction.
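The abstract's core idea, that a recurrent network trained by ordinary gradient descent can itself act as a learning algorithm, can be illustrated with a toy sketch. The paper uses LSTM networks; the minimal version below, an assumption for illustration only, uses a vanilla tanh RNN trained by hand-written backpropagation through time. Each "task" is a random linear map; the network sees the previous true target alongside each input, so across many tasks its recurrent dynamics must implement an online learning rule. All names, sizes, and the task family are illustrative choices, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

I, H, T = 2, 16, 10            # input size (x_t, y_{t-1}), hidden units, steps per task

# Meta-learner parameters (a vanilla RNN stands in for the paper's LSTM).
Wx = rng.normal(0, 0.3, (H, I))
Wh = rng.normal(0, 0.3, (H, H))
bh = np.zeros(H)
Wo = rng.normal(0, 0.3, (1, H))
bo = np.zeros(1)

def sample_task():
    """A task is a random linear function y = a*x + b on random inputs."""
    a, b = rng.uniform(-1, 1, 2)
    xs = rng.uniform(-1, 1, T)
    return xs, a * xs + b

def forward(xs, ys):
    """Predict y_t from x_t and the previously revealed true target y_{t-1}."""
    h = np.zeros(H)
    preds, cache = np.zeros(T), []
    y_prev = 0.0
    for t in range(T):
        u = np.array([xs[t], y_prev])
        h_new = np.tanh(Wx @ u + Wh @ h + bh)
        preds[t] = (Wo @ h_new + bo)[0]
        cache.append((u, h, h_new))
        h, y_prev = h_new, ys[t]   # true target revealed after each prediction
    return preds, cache

def loss_and_grads(xs, ys):
    """Mean squared error over the sequence, with BPTT gradients."""
    preds, cache = forward(xs, ys)
    gWx, gWh, gbh = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(bh)
    gWo, gbo = np.zeros_like(Wo), np.zeros_like(bo)
    dh = np.zeros(H)
    for t in reversed(range(T)):
        u, h_prev, h_new = cache[t]
        dpred = (preds[t] - ys[t]) / T
        gWo += dpred * h_new[None, :]
        gbo += dpred
        dpre = (dh + Wo[0] * dpred) * (1 - h_new ** 2)
        gWx += np.outer(dpre, u)
        gWh += np.outer(dpre, h_prev)
        gbh += dpre
        dh = Wh.T @ dpre
    return 0.5 * np.mean((preds - ys) ** 2), (gWx, gWh, gbh, gWo, gbo)

# Meta-training: plain gradient descent across many sampled tasks.
lr, losses = 0.02, []
for step in range(3000):
    xs, ys = sample_task()
    loss, (gWx, gWh, gbh, gWo, gbo) = loss_and_grads(xs, ys)
    Wx -= lr * gWx; Wh -= lr * gWh; bh -= lr * gbh
    Wo -= lr * gWo; bo -= lr * gbo
    losses.append(loss)

print(f"meta-loss: first-300 mean {np.mean(losses[:300]):.4f}, "
      f"last-300 mean {np.mean(losses[-300:]):.4f}")
```

After meta-training, per-step error on a fresh task tends to fall over the sequence: the frozen network is "learning" the new task purely through its recurrent state, which is the sense in which gradient descent has discovered a learning algorithm.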