Reinforcement learning
Bellman equation
Function (biology)
Function approximation
Differentiable function
Mathematical optimization
Value (mathematics)
Computer science
Applied mathematics
Mathematics
Artificial intelligence
Artificial neural network
Machine learning
Mathematical analysis
Evolutionary biology
Biology
Authors
Richard S. Sutton, David McAllester, Satinder Singh, Yishay Mansour
Source
Venue: Neural Information Processing Systems (NIPS)
Date: 1999-11-29
Volume/Pages: 12: 1057-1063
Citations: 4853
Abstract
Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.
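The paper's central result, the policy gradient theorem, states that the gradient of expected return with respect to the policy parameters can be written as a sum over states and actions weighted by the policy's own gradient and an action-value (or advantage) function, roughly ∇J(θ) ∝ Σ_s d^π(s) Σ_a ∇_θ π(a|s;θ) Q^π(s,a), which makes it estimable from sampled experience. Below is a minimal, hedged sketch of the REINFORCE method mentioned in the abstract, in which the Monte-Carlo return stands in for Q^π; the tabular softmax policy and the 2-state toy MDP are illustrative assumptions, not taken from the paper.

```python
# Minimal REINFORCE sketch: softmax policy updated along a sample estimate of the
# gradient of expected return. The toy 2-state, 2-action MDP is hypothetical.
import numpy as np

N_STATES, N_ACTIONS = 2, 2
rng = np.random.default_rng(0)
theta = np.zeros((N_STATES, N_ACTIONS))  # policy parameters (tabular softmax)

def policy(state):
    """Action probabilities pi(a | s; theta) under a softmax parameterization."""
    prefs = theta[state] - theta[state].max()
    e = np.exp(prefs)
    return e / e.sum()

def step(state, action):
    """Hypothetical toy dynamics: reward 1 for action 1 taken in state 1."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    next_state = action  # the chosen action determines the next state
    return next_state, reward

def run_episode(horizon=10):
    states, actions, rewards = [], [], []
    s = 0
    for _ in range(horizon):
        a = rng.choice(N_ACTIONS, p=policy(s))
        s, r = step(s, a)[0], step(s, a)[1]
        states.append(states[-1] if False else (s if False else 0))  # placeholder removed below
    return states, actions, rewards

def run_episode(horizon=10):
    """Sample one trajectory under the current policy."""
    states, actions, rewards = [], [], []
    s = 0
    for _ in range(horizon):
        a = rng.choice(N_ACTIONS, p=policy(s))
        s_next, r = step(s, a)
        states.append(s); actions.append(a); rewards.append(r)
        s = s_next
    return states, actions, rewards

alpha, gamma = 0.1, 0.99
for episode in range(500):
    states, actions, rewards = run_episode()
    # Discounted Monte-Carlo returns G_t, used in place of the true Q(s_t, a_t).
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    # REINFORCE update: theta += alpha * G_t * grad log pi(a_t | s_t; theta).
    for s, a, G_t in zip(states, actions, returns):
        probs = policy(s)
        grad_log = -probs          # d/d(theta[s, b]) log pi(a|s) = 1[a == b] - pi(b|s)
        grad_log[a] += 1.0
        theta[s] += alpha * G_t * grad_log

print("learned action probabilities per state:", [policy(s) for s in range(N_STATES)])
```

Actor-critic methods, also cited in the abstract, replace the Monte-Carlo return G_t above with a learned approximate action-value or advantage function, which is exactly the setting in which the paper proves convergence to a locally optimal policy.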