Reinforcement learning
Bottleneck
Computer science
Overhead (engineering)
Controller (irrigation)
Convergence (economics)
Distributed computing
Artificial intelligence
Distributed algorithm
Embedded system
Agronomy
Economic growth
Biology
Operating system
Economics
Authors
Tianyi Chen, Kaiqing Zhang, Georgios B. Giannakis, Tamer Başar
Source
Journal: IEEE Transactions on Control of Network Systems (Institute of Electrical and Electronics Engineers)
Date: 2021-05-06
Volume/Issue: 9 (2): 917-929
Cited by: 34
Identifier
DOI: 10.1109/TCNS.2021.3078100
Abstract
This article deals with distributed policy optimization in reinforcement learning, which involves a central controller and a group of learners. In particular, two typical settings encountered in several applications are considered: multiagent reinforcement learning (RL) and parallel RL, both of which require frequent information exchanges between the learners and the controller. For many practical distributed systems, however, the overhead caused by these frequent communication exchanges is considerable and becomes the bottleneck of the overall performance. To address this challenge, a novel policy gradient approach is developed for solving distributed RL. The new approach adaptively skips policy gradient communication during iterations, and can reduce the communication overhead without degrading learning performance. It is established analytically that (i) the novel algorithm has a convergence rate identical to that of the plain-vanilla policy gradient, and (ii) if the distributed learners are heterogeneous in terms of their reward functions, the number of communication rounds needed to achieve a desired learning accuracy is markedly reduced. Numerical experiments corroborate the communication reduction attained by the novel algorithm compared with alternatives.
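The abstract describes the mechanism but not the exact skipping rule, so the following is a minimal sketch of the idea under stated assumptions, not the paper's algorithm: every iteration each learner recomputes its local gradient but uploads it to the controller only when it differs enough from the last gradient it sent; otherwise the controller reuses the stale copy. The quadratic local objectives (a toy stand-in for each learner's policy gradient oracle), the constant delta, and the squared-norm skipping test are all hypothetical choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
dim, num_learners, steps, lr, delta = 5, 4, 200, 0.05, 1e-3

# Heterogeneous learners: each has its own target, standing in for
# learners with different reward functions.
targets = [rng.normal(size=dim) for _ in range(num_learners)]

def local_grad(m, theta):
    # Gradient of the m-th learner's loss 0.5 * ||theta - targets[m]||^2,
    # a toy stand-in for a local policy gradient estimate.
    return theta - targets[m]

theta = np.zeros(dim)
last_sent = [local_grad(m, theta) for m in range(num_learners)]  # round 0: all upload
uploads = num_learners

for t in range(steps):
    for m in range(num_learners):
        g = local_grad(m, theta)
        # Adaptive skipping (hypothetical rule): upload only if the gradient
        # "innovation" since the last upload is large relative to the current
        # gradient; otherwise the controller keeps using the stale copy.
        if np.linalg.norm(g - last_sent[m]) ** 2 >= delta * np.linalg.norm(g) ** 2:
            last_sent[m] = g
            uploads += 1
    # Controller aggregates the freshest gradients it holds and takes a step
    # (descent on the toy losses plays the role of ascent on expected return).
    theta = theta - lr * np.mean(last_sent, axis=0)

print(f"uploads used: {uploads} of {num_learners * (steps + 1)} possible")
print("distance to joint optimum:", np.linalg.norm(theta - np.mean(targets, axis=0)))

In this toy run the iterates still approach the joint optimum (the mean of the targets) while a fraction of the per-iteration uploads are skipped, which loosely mirrors the communication reduction the abstract claims for heterogeneous learners.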