Tags
Reinforcement learning, Flexibility (engineering), Computer science, Artificial intelligence, Function (biology), Robotics, Trajectory, Human-computer interaction, Error-driven learning, Astronomy, Mathematics, Evolutionary biology, Biology, Statistics, Physics
Authors
Paul F. Christiano,Jan Leike,T. B. Brown,Miljan Martic,Shane Legg,Dario Amodei
Source
Journal: Neural Information Processing Systems
Date: 2017-06-12
Volume/Issue: 30: 4299-4307
Citations: 212
Abstract
For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. Our approach separates learning the goal from learning the behavior to achieve it. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on about 0.1% of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.
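As the abstract notes, the method separates learning the goal from learning the behavior: a reward model is fit to human preferences over pairs of trajectory segments, and the agent is then trained by RL against that learned reward. The paper models the probability that one segment is preferred as proportional to the exponentiated sum of predicted rewards over that segment (a Bradley-Terry model) and minimizes cross-entropy against the human labels. Below is a minimal sketch of that preference-fitting step, assuming a PyTorch setup; the network architecture, dimensions, and the `RewardModel`/`preference_loss` names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Predicts a per-step reward r(s, a); layer sizes here are illustrative."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs: (T, obs_dim), act: (T, act_dim) -> (T,) predicted rewards
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(model: RewardModel, seg1, seg2, pref: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry cross-entropy on one labeled pair of trajectory segments.

    seg1, seg2: (obs, act) tensors for each segment.
    pref: 1.0 if the human preferred seg1, 0.0 if seg2, 0.5 for a tie.
    """
    # Segment "return" under the learned reward: sum of per-step predictions.
    ret1 = model(*seg1).sum()
    ret2 = model(*seg2).sum()
    # P[seg1 preferred] = sigmoid(ret1 - ret2); cross-entropy vs. the label.
    return F.binary_cross_entropy_with_logits(ret1 - ret2, pref)

# One gradient step on a single hypothetical comparison, with random
# placeholder segments standing in for recorded agent trajectories.
obs_dim, act_dim, T = 8, 2, 25
model = RewardModel(obs_dim, act_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seg1 = (torch.randn(T, obs_dim), torch.randn(T, act_dim))
seg2 = (torch.randn(T, obs_dim), torch.randn(T, act_dim))
loss = preference_loss(model, seg1, seg2, pref=torch.tensor(1.0))
opt.zero_grad()
loss.backward()
opt.step()
```

In the full method, this loss would be minimized over a growing dataset of human comparisons while the policy trains in parallel against the current reward estimate; the single step above only illustrates the loss, not the asynchronous training loop.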