Curiosity
Reinforcement learning
Generalization
Computer science
Task (project management)
Diversity (cybernetics)
Artificial intelligence
Reinforcement
Representation (politics)
Function (biology)
Machine learning
Psychology
Engineering
Mathematics
Social psychology
Mathematical analysis
Systems engineering
Evolutionary biology
Politics
Law
Political science
Biology
Authors
Nicolas Bougie, Ryutaro Ichise
Source
Journal: Machine Learning
[Springer Nature]
Date: 2019-10-10
Volume/Issue: 109 (3): 493-512
Citations: 22
Identifier
DOI:10.1007/s10994-019-05845-8
Abstract
Reinforcement learning methods rely on rewards provided by the environment that are extrinsic to the agent. However, many real-world scenarios involve sparse or delayed rewards. In such cases, the agent can develop its own intrinsic reward function, called curiosity, to explore its environment in the quest for new skills. We propose a novel end-to-end curiosity mechanism for deep reinforcement learning methods that allows an agent to gradually acquire new skills. Our method scales to high-dimensional problems, avoids the need to directly predict the future, and can perform in sequential decision scenarios. We formulate curiosity as the ability of the agent to predict its own knowledge about the task. We base the prediction on the idea of skill learning to incentivize the discovery of new skills and to guide exploration towards promising solutions. To further improve the data efficiency and generalization of the agent, we propose to learn a latent representation of the skills. We present a variety of sparse-reward tasks in MiniGrid, MuJoCo, and Atari games. We compare the performance of an augmented agent that uses our curiosity reward to state-of-the-art learners. Experimental evaluation exhibits higher performance compared to reinforcement learning models that learn only by maximizing extrinsic rewards.
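The abstract describes augmenting the environment's extrinsic reward with an intrinsic curiosity bonus. The sketch below is only a generic illustration of that idea, not the authors' method (which predicts the agent's own knowledge via learned skill representations): it uses the common prediction-error formulation, where a toy forward model is trained online and its error serves as the intrinsic reward, so repeatedly visited transitions become less "interesting". All dimensions, names, and the linear model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a linear forward model predicting the next
# state from [state, action]. Real curiosity methods use learned
# feature encoders and neural forward models.
STATE_DIM, ACTION_DIM = 4, 2
W = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM + ACTION_DIM))

def curiosity_reward(state, action, next_state, lr=0.01):
    """Intrinsic reward = forward-model prediction error.
    The model is updated online, so familiar transitions yield
    a shrinking bonus while novel ones remain rewarding."""
    global W
    x = np.concatenate([state, action])
    err = next_state - W @ x
    intrinsic = float(err @ err)      # squared prediction error
    W += lr * np.outer(err, x)        # one SGD step on the model
    return intrinsic

def total_reward(extrinsic, intrinsic, beta=0.1):
    """Shaped reward the agent maximizes: extrinsic plus a scaled
    curiosity bonus (beta is a hypothetical weighting)."""
    return extrinsic + beta * intrinsic

# Revisiting the same transition drives the curiosity bonus toward
# zero, pushing exploration toward unfamiliar parts of the environment.
s, a, s2 = np.ones(STATE_DIM), np.ones(ACTION_DIM), np.zeros(STATE_DIM)
first = curiosity_reward(s, a, s2)
for _ in range(200):
    last = curiosity_reward(s, a, s2)
```

In sparse-reward settings the extrinsic term is zero almost everywhere, so early learning is driven entirely by the intrinsic term; the decay of the bonus on familiar transitions is what keeps the agent moving toward new skills.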