Keywords
Synchronization (alternating current), Reinforcement learning, Control theory (sociology), Computer science, Optimal control, Observer (physics), Controller (irrigation), Mathematical optimization, Control (management), Mathematics, Artificial intelligence, Biology, Quantum mechanics, Physics, Channel (broadcasting), Computer network, Agronomy
Authors
Hamidreza Modares,Subramanya Nageshrao,Gabriel A. D. Lopes,Robert Babuška,Frank L. Lewis
Source
Journal: Automatica [Elsevier]
Date: 2016-06-15
Volume/Pages: 71: 334-341
Citations: 150
Identifier
DOI: 10.1016/j.automatica.2016.05.017
Abstract
This paper considers optimal output synchronization of heterogeneous linear multi-agent systems. Standard approaches to output synchronization of heterogeneous systems require either the solution of the output regulator equations or the incorporation of a p-copy of the leader's dynamics in the controller of each agent. By contrast, in this paper neither one is needed. Moreover, here both the leader's and the followers' dynamics are assumed to be unknown. First, a distributed adaptive observer is designed to estimate the leader's state for each agent. The output synchronization problem is then formulated as an optimal control problem, and a novel model-free off-policy reinforcement learning algorithm is developed to solve it online in real time. It is shown that this optimal distributed approach implicitly solves the output regulation equations without actually doing so. Simulation results are provided to verify the effectiveness of the proposed approach.
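For reference, a minimal sketch of the output regulator (Francis) equations mentioned in the abstract, using notation assumed here for illustration rather than taken from this page: for a follower with linear dynamics $\dot{x}_i = A_i x_i + B_i u_i$, $y_i = C_i x_i$ and a leader (exosystem) $\dot{\zeta}_0 = S \zeta_0$, $y_0 = R \zeta_0$, the standard approach seeks matrices $(\Pi_i, \Gamma_i)$ satisfying

$$A_i \Pi_i + B_i \Gamma_i = \Pi_i S, \qquad C_i \Pi_i = R.$$

Solving these directly requires the model matrices $(A_i, B_i, C_i, S, R)$; the abstract's claim is that the distributed adaptive observer combined with the off-policy reinforcement learning scheme achieves the same output-synchronizing behavior without these models and without computing $(\Pi_i, \Gamma_i)$ explicitly.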