Reinforcement learning
Monotonic function
Computer science
Bellman equation
Mathematical optimization
Artificial intelligence
Mathematics
Mathematical analysis
Authors
Jakub Grudzien Kuba, Ruiqing Chen, Muning Wen, Ying Wen, Fanglei Sun, Jun Wang, Yaodong Yang
Source
Journal: Cornell University - arXiv
Date: 2021-09-23
Citations: 2
Identifier
DOI: 10.48550/arxiv.2109.11251
Abstract
Trust region methods rigorously enabled reinforcement learning (RL) agents to learn monotonically improving policies, leading to superior performance on a variety of tasks. Unfortunately, when it comes to multi-agent reinforcement learning (MARL), the property of monotonic improvement may not simply apply; this is because agents, even in cooperative games, could have conflicting directions of policy updates. As a result, achieving a guaranteed improvement on the joint policy where each agent acts individually remains an open challenge. In this paper, we extend the theory of trust region learning to MARL. Central to our findings are the multi-agent advantage decomposition lemma and the sequential policy update scheme. Based on these, we develop Heterogeneous-Agent Trust Region Policy Optimisation (HATRPO) and Heterogeneous-Agent Proximal Policy Optimisation (HAPPO) algorithms. Unlike many existing MARL algorithms, HATRPO/HAPPO do not need agents to share parameters, nor do they need any restrictive assumptions on decomposability of the joint value function. Most importantly, we justify in theory the monotonic improvement property of HATRPO/HAPPO. We evaluate the proposed methods on a series of Multi-Agent MuJoCo and StarCraft II tasks. Results show that HATRPO and HAPPO significantly outperform strong baselines such as IPPO, MAPPO and MADDPG on all tested tasks, therefore establishing a new state of the art.
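The sequential policy update scheme mentioned in the abstract can be illustrated with a minimal sketch: agents update one at a time in a random order, and each later agent's surrogate advantage is re-weighted by the probability ratios of the agents already updated, so that each update accounts for the policy changes made before it. The toy `SoftmaxPolicy`, the single clipped gradient step, and all data shapes below are illustrative assumptions for a HAPPO-style update, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a sequential, agent-by-agent clipped
# policy update with compounded ratio weighting, under toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

class SoftmaxPolicy:
    """Toy tabular softmax policy over n_actions, shared across all observations."""
    def __init__(self, n_actions):
        self.logits = np.zeros(n_actions)

    def probs(self, actions):
        p = np.exp(self.logits - self.logits.max())
        p /= p.sum()
        return p[actions]                               # probability of each taken action

    def clipped_step(self, actions, weights, old_probs, clip_eps=0.2, lr=0.1):
        """One gradient ascent step on a PPO-style clipped surrogate."""
        p = np.exp(self.logits - self.logits.max())
        p /= p.sum()
        ratio = p[actions] / old_probs
        clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
        # Gradient flows through the ratio only where the unclipped term is the minimum.
        active = (ratio * weights) <= (clipped * weights)
        grad = np.zeros_like(self.logits)
        for a, w, r, is_active in zip(actions, weights, ratio, active):
            if is_active:
                onehot = np.eye(len(self.logits))[a]
                grad += w * r * (onehot - p)            # d/d logits of (ratio * weight)
        self.logits += lr * grad / len(actions)

def sequential_update(policies, joint_actions, advantages, clip_eps=0.2):
    """Update each agent in a random order, re-weighting advantages by the
    compounded probability ratios of the agents updated so far."""
    m = np.ones(len(advantages))                        # compounded ratio weight
    for i in rng.permutation(len(policies)):
        pi = policies[i]
        acts = joint_actions[:, i]
        old = pi.probs(acts)                            # probabilities before this agent's update
        pi.clipped_step(acts, m * advantages, old, clip_eps)
        m *= pi.probs(acts) / old                       # fold this agent's ratio into the weight

# Tiny usage example: 3 agents, 4 actions each, random toy data.
policies = [SoftmaxPolicy(4) for _ in range(3)]
joint_actions = rng.integers(0, 4, size=(32, 3))
advantages = rng.normal(size=32)
sequential_update(policies, joint_actions, advantages)
```

In this sketch the re-weighting factor `m` plays the role that the earlier agents' updated policies play in the paper's scheme; a full implementation would use neural policies, per-observation distributions, and an estimated joint advantage rather than the toy quantities used here.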