Topics
Computer science
Reinforcement learning
Artificial intelligence
Meta-learning (computer science)
Machine learning
Online machine learning
Stability (learning theory)
Active learning (machine learning)
Process (computing)
Task (project management)
Economics
Management
Operating system
Authors
Zhixiong Xu,Weidong Zhang,Ailin Li,Feifei Zhao,Yuanyuan Jing,Zheng Wan,Lei Cao,Xiliang Chen
Identifier
DOI:10.1093/comjnl/bxad089
Abstract
Meta-learning is a pivotal and potentially influential machine learning approach for solving challenging problems in reinforcement learning. However, the costly hyper-parameter tuning required for training stability is a known shortcoming of meta-learning and currently a research hotspot. This paper addresses this shortcoming by introducing an online, easily trainable hyper-parameter optimization approach, called Meta Parameters Learning via Meta-Learning (MPML), which incorporates an online hyper-parameter adjustment scheme into the meta-learning algorithm and thereby reduces the need to tune hyper-parameters. Specifically, a basic learning rate for each training task is put forward. The proposed algorithm then dynamically adapts these per-task basic learning rates and a shared meta-learning rate by conducting gradient descent alongside the initial optimization steps. In addition, the sensitivity of the proposed approach to hyper-parameter choices is discussed and compared with the model-agnostic meta-learning (MAML) method. Experimental results on reinforcement learning problems demonstrate that the MPML algorithm is easy to implement and delivers highly competitive performance relative to existing meta-learning methods on a diverse set of challenging control tasks.
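The core mechanism the abstract describes — treating the inner learning rate itself as a meta-parameter and updating it by gradient descent on the post-adaptation loss — can be illustrated with a toy sketch. This is not the authors' MPML code: it assumes hypothetical one-dimensional quadratic tasks `L_i(w) = (w - c_i)^2` so that all gradients can be written analytically, and it adapts a single shared inner learning rate `alpha` together with the initialization `w`.

```python
import numpy as np

def inner_update(w, c, alpha):
    # One inner-loop gradient step on the task loss L(w) = (w - c)^2.
    return w - alpha * 2.0 * (w - c)

def meta_train(task_centers, w=0.0, alpha=0.05, beta=0.01, steps=200):
    """Meta-learn both the initialization w and the inner learning rate alpha
    by gradient descent on the averaged post-update (meta) loss."""
    for _ in range(steps):
        grad_w, grad_alpha = 0.0, 0.0
        for c in task_centers:
            w_adapted = inner_update(w, c, alpha)
            # d/dw (w' - c)^2, with w' = w - 2*alpha*(w - c)
            grad_w += 2.0 * (w_adapted - c) * (1.0 - 2.0 * alpha)
            # d/dalpha (w' - c)^2
            grad_alpha += 2.0 * (w_adapted - c) * (-2.0 * (w - c))
        w -= beta * grad_w / len(task_centers)
        alpha -= beta * grad_alpha / len(task_centers)
    return w, alpha

tasks = [-1.0, 0.5, 2.0]
w, alpha = meta_train(tasks)
meta_loss = np.mean([(inner_update(w, c, alpha) - c) ** 2 for c in tasks])
```

Because the meta-gradient with respect to `alpha` vanishes exactly when one inner step lands on each task optimum, `alpha` converges toward 0.5 here and the meta-loss shrinks toward zero; the paper's setting replaces this toy loss with reinforcement learning objectives and keeps a separate basic learning rate per task.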