Reinforcement learning
Computer science
Bellman equation
Synchronization
Control
Mathematical optimization
Optimal control
Function
Multi-agent systems
Value (mathematics)
Nash equilibrium
Artificial intelligence
Mathematics
Machine learning
Computer networks
Channel
Evolutionary biology
Biology
Authors
Jinna Li,Hamidreza Modares,Tianyou Chai,Frank L. Lewis,Lihua Xie
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2017-10-01
Volume/Issue: 28 (10): 2434-2445
Citations: 147
Identifier
DOI:10.1109/tnnls.2016.2609500
Abstract
This paper develops an off-policy reinforcement learning (RL) algorithm to solve optimal synchronization of multiagent systems. This is accomplished by using the framework of graphical games. In contrast to traditional control protocols, which require complete knowledge of agent dynamics, the proposed off-policy RL algorithm is a model-free approach, in that it solves the optimal synchronization problem without requiring any knowledge of the agent dynamics. A prescribed control policy, called the behavior policy, is applied to each agent to generate and collect data for learning. An off-policy Bellman equation is derived for each agent to learn the value function of the policy under evaluation, called the target policy, and to find an improved policy simultaneously. Actor and critic neural networks, together with a least-squares approach, are employed to approximate the target control policies and value functions from the data generated by applying the prescribed behavior policies. Finally, an off-policy RL algorithm is presented that is implemented in real time and yields an approximate optimal control policy for each agent using only measured data. It is shown that the optimal distributed policies found by the proposed algorithm constitute a global Nash equilibrium and synchronize all agents to the leader. Simulation results illustrate the effectiveness of the proposed method.
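The abstract's core loop (collect data under an exploratory behavior policy, fit a value/Q-function for a target policy by least squares, then improve the policy) can be illustrated on a toy problem. The sketch below is not the paper's multiagent graphical-game algorithm; it is a minimal single-agent, scalar linear-quadratic analogue with a hypothetical setup (dynamics `a`, `b`, cost `x^2 + u^2`, quadratic features `phi`) chosen only to show the off-policy evaluate-then-improve pattern.

```python
import numpy as np

# Hypothetical toy system (not from the paper): x' = a*x + b*u,
# discounted quadratic cost. Data comes from a random (behavior)
# policy; a quadratic Q-function of the target policy u = -k*x is
# fit by least squares, i.e. off-policy policy evaluation.
rng = np.random.default_rng(0)
a, b, gamma = 0.9, 0.5, 0.95

def phi(x, u):
    # Quadratic features: Q(x, u) = w0*x^2 + w1*x*u + w2*u^2
    return np.array([x * x, x * u, u * u])

k = 0.0  # target-policy gain, improved each iteration
for _ in range(20):
    X, y = [], []
    for _ in range(200):
        x = rng.uniform(-1, 1)
        u = rng.uniform(-1, 1)      # behavior policy: pure exploration
        cost = x * x + u * u
        x2 = a * x + b * u          # observed next state
        u2 = -k * x2                # what the TARGET policy would do there
        # Off-policy Bellman equation: Q(x,u) - gamma*Q(x',u') = cost
        X.append(phi(x, u) - gamma * phi(x2, u2))
        y.append(cost)
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    # Greedy improvement: minimize w1*x*u + w2*u^2 over u -> u = -k*x
    k = w[1] / (2 * w[2])

print("learned gain k =", k)  # closed loop a - b*k is stable
```

Because the features are exact for this linear-quadratic case, the least-squares fit recovers the target policy's Q-function and the iteration converges to a stabilizing gain; in the paper this role is played by critic and actor neural networks per agent, with the leader's trajectory defining the synchronization error.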