
Proximal Policy Optimization With Policy Feedback

Authors
Yang Gu, Yuhu Cheng, C. L. Philip Chen, Xuesong Wang
Source
Journal: IEEE Transactions on Systems, Man, and Cybernetics [Institute of Electrical and Electronics Engineers]
Volume/Issue: 52 (7): 4600-4610; Cited by: 51
Identifier
DOI: 10.1109/tsmc.2021.3098451
Abstract

Proximal policy optimization (PPO) is a deep reinforcement learning algorithm based on the actor–critic (AC) architecture. In the classic AC architecture, the Critic (value) network is used to estimate the value function while the Actor (policy) network optimizes the policy according to the estimated value function. The efficiency of the classic AC architecture is limited because the policy does not directly participate in the value-function update. This makes the value-function estimate inaccurate, which in turn degrades the performance of the PPO algorithm. As an improvement, we designed a novel AC architecture with policy feedback (AC-PF) by introducing the policy into the update process of the value function, and further proposed PPO with policy feedback (PPO-PF). For the AC-PF architecture, the policy-based expected (PBE) value function and discounted reward formulas are designed by drawing inspiration from expected Sarsa. To enhance the sensitivity of the value function to changes in the policy and to improve the accuracy of PBE value estimation at the early learning stage, we proposed a policy update method based on a clipped discount factor. Moreover, we specifically defined the loss functions of the policy network and value network to ensure that the policy update of PPO-PF satisfies the unbiased estimation of the trust region. Experiments on Atari games and control tasks show that, compared to PPO, PPO-PF has faster convergence, higher reward, and smaller variance of reward.
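The two standard ingredients the abstract builds on can be sketched briefly. The exact PBE value function, clipped-discount-factor update, and loss definitions are specified in the paper itself, so the helpers below are only an illustrative NumPy sketch of (a) PPO's clipped surrogate objective and (b) an expected-Sarsa-style value target, where the next-state value is the policy-weighted expectation over action values rather than a sampled one; the function names are my own.

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: min(r * A, clip(r, 1-eps, 1+eps) * A),
    where r = pi_new(a|s) / pi_old(a|s) and A is the advantage estimate."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

def expected_sarsa_target(reward, next_q, next_policy, gamma=0.99):
    """Expected-Sarsa-style target: r + gamma * sum_a pi(a|s') * Q(s', a).
    Feeding the policy into the target is the sense in which the value
    update receives 'policy feedback'."""
    return reward + gamma * np.dot(next_policy, next_q)
```

For example, with `eps=0.2` a ratio of 1.5 is clipped to 1.2 for a positive advantage, which bounds how far a single update can move the policy; the expected-Sarsa target averages over the policy's action distribution instead of the single action taken, reducing the variance of the value estimate.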
