Keywords
Reinforcement learning
Computer science
Optimal control
Synchronization
Algebraic Riccati equation
Riccati equation
Controller
Tracking
Control theory
State
Mathematical optimization
Control
Artificial intelligence
Mathematics
Algorithm
Differential equation
Authors
Yong Xu,Zheng‐Guang Wu,Wei‐Wei Che,Deyuan Meng
Identifier
DOI:10.1007/s11432-022-3729-7
Abstract
This paper focuses on the optimal output synchronization control problem of heterogeneous multiagent systems (HMASs) subject to nonidentical communication delays by a reinforcement learning method. Compared with existing studies assuming that the precise model of the leader is globally or distributively accessible to all or some of the followers, the leader's precise dynamical model is entirely inaccessible to all the followers in this paper. A data-based learning algorithm is first proposed to reconstruct the leader's unknown system matrix online. A distributed predictor subject to communication delays is further devised to estimate the leader's state, where interaction delays are allowed to be nonidentical. Then, a learning-based local controller, together with a discounted performance function, is designed to achieve optimal output synchronization. Bellman equations and game algebraic Riccati equations are constructed to learn the optimal solution by developing a model-based reinforcement learning (RL) algorithm online without solving regulator equations, which is followed by a model-free off-policy RL algorithm to relax the requirement of all agents' dynamics faced by the model-based RL algorithm. The optimal tracking control of HMASs subject to unknown leader dynamics and communication delays is shown to be solvable under the proposed RL algorithms. Finally, the effectiveness of the theoretical analysis is verified by numerical simulations.
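The abstract's core idea of learning the optimal gain by alternating a Bellman (policy evaluation) step with a policy improvement step, rather than solving the Riccati equation in one shot, can be illustrated with a minimal sketch. This is not the paper's algorithm: the paper treats continuous-time game algebraic Riccati equations with discounting and a model-free off-policy variant, whereas the toy below runs standard policy iteration (Kleinman's algorithm) on a discrete-time LQR problem with hypothetical system matrices chosen for the example.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Hypothetical system matrices (not from the paper); A is Schur-stable,
# so the zero gain is a valid stabilizing initial policy.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # input cost

K = np.zeros((1, 2))   # initial stabilizing gain
for _ in range(30):
    # Policy evaluation: solve the Bellman/Lyapunov equation
    #   (A - B K)^T P (A - B K) - P + Q + K^T R K = 0
    P = solve_discrete_lyapunov((A - B @ K).T, Q + K.T @ R @ K)
    # Policy improvement: greedy gain with respect to the current P
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The iterates converge to the stabilizing solution of the discrete ARE.
P_are = solve_discrete_are(A, B, Q, R)
print(np.max(np.abs(P - P_are)))  # residual of the learned value matrix
```

The model-free off-policy variants mentioned in the abstract replace the Lyapunov solve, which requires knowing A and B, with least-squares estimates built from measured state and input trajectories, while keeping the same evaluation-improvement loop.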