Reinforcement learning
Value (mathematics)
Bellman equation
Function (biology)
Computer science
Architecture
Variance (accounting)
Process (computing)
Value network
Convergence (economics)
Mathematical optimization
Artificial intelligence
Machine learning
Mathematics
Economics
Biology
Operating system
Accounting
Evolutionary biology
Art
Visual arts
Management
Economic growth
Business model
Authors
Yang Gu, Yuhu Cheng, C. L. Philip Chen, Xuesong Wang
Source
Journal: IEEE Transactions on Systems, Man, and Cybernetics
[Institute of Electrical and Electronics Engineers]
Date: 2021-07-29
Volume/Issue: 52 (7): 4600-4610
Citations: 40
Identifiers
DOI: 10.1109/tsmc.2021.3098451
Abstract
Proximal policy optimization (PPO) is a deep reinforcement learning algorithm based on the actor–critic (AC) architecture. In the classic AC architecture, the critic (value) network estimates the value function while the actor (policy) network optimizes the policy according to the estimated value function. The efficiency of the classic AC architecture is limited because the policy does not directly participate in the value function update; this makes the value function estimate inaccurate, which in turn degrades the performance of the PPO algorithm. To improve on this, we designed a novel AC architecture with policy feedback (AC-PF) by introducing the policy into the update process of the value function, and further proposed PPO with policy feedback (PPO-PF). For the AC-PF architecture, the policy-based expected (PBE) value function and discounted reward formulas are designed by drawing inspiration from expected Sarsa. To make the value function more sensitive to changes in the policy and to improve the accuracy of the PBE value estimate at the early learning stage, we proposed a policy update method based on a clipped discount factor. Moreover, we defined the loss functions of the policy network and the value network to ensure that the policy update of PPO-PF satisfies the unbiased estimation of the trust region. Experiments on Atari games and control tasks show that, compared to PPO, PPO-PF has faster convergence, higher reward, and smaller reward variance.
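To make the ideas in the abstract concrete, the sketch below pairs a standard PPO clipped surrogate loss with an expected-Sarsa-style value target in which the current policy's action probabilities weight the bootstrapped next-state values, so the policy enters the value update directly. This is a minimal illustration under those assumptions, not the paper's exact PBE value function or clipped-discount-factor scheme; all function and variable names are hypothetical.

```python
# Minimal sketch (assumed formulation, not the authors' exact method):
# PPO clipped surrogate loss + expected-Sarsa-style, policy-weighted value target.
import torch

def ppo_clipped_policy_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate objective, negated for gradient descent."""
    ratio = torch.exp(new_logp - old_logp)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

def expected_sarsa_value_target(rewards, next_q, next_policy_probs, dones, gamma=0.99):
    """Policy-weighted bootstrap target: r + gamma * sum_a pi(a|s') * Q(s', a),
    with the bootstrap term zeroed at episode ends."""
    expected_next_v = (next_policy_probs * next_q).sum(dim=-1)
    return rewards + gamma * (1.0 - dones) * expected_next_v

def value_loss(predicted_v, targets):
    """Mean-squared error between critic predictions and (detached) targets."""
    return torch.nn.functional.mse_loss(predicted_v, targets.detach())
```

Because the target averages next-state action values under the current policy rather than using a single sampled action, the critic's update reflects policy changes immediately, which is the high-level intuition behind the "policy feedback" design described above.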