Control theory
Computer science
Reinforcement learning
Lyapunov stability
Controller
Tracking error
Adaptive control
Stability theory
Multiagent systems
Compensation
Gradient descent
Lyapunov function
Control
Artificial neural network
Artificial intelligence
Machine learning
Nonlinear systems
Authors
Hongyi Li, Ying Wu, Mou Chen, Renquan Lu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2021-07-01
Volume/Issue: 34 (1): 144-156
Citations: 133
Identifier
DOI: 10.1109/tnnls.2021.3090570
Abstract
This article proposes a fault-tolerant adaptive multigradient recursive reinforcement learning (RL) event-triggered tracking control scheme for strict-feedback discrete-time multiagent systems. The multigradient recursive RL algorithm is used to avoid the local-optimum problem that may arise in gradient descent schemes. Unlike existing event-triggered control results, a new lemma on the relative-threshold event-triggered control strategy is proposed to handle the compensation error, which improves the utilization of communication resources and weakens the negative impact on tracking accuracy and closed-loop system stability. To overcome the difficulty caused by sensor faults, a distributed control method is introduced by adopting an adaptive compensation technique, which effectively decreases the number of online estimated parameters. Furthermore, because the multigradient recursive RL algorithm uses fewer learning parameters, the online estimation time can be effectively reduced. The stability of the closed-loop multiagent systems is proved using the Lyapunov stability theorem, and all signals are shown to be semiglobally uniformly ultimately bounded. Finally, two simulation examples demonstrate the effectiveness of the presented control scheme.
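To illustrate the relative-threshold event-triggered idea mentioned in the abstract, below is a minimal sketch: the control input is retransmitted only when its deviation from the last transmitted value exceeds a fraction of the current input magnitude plus a small absolute floor. This is a toy scalar plant with assumed gains (`delta`, `eps`, the feedback gain, and the plant dynamics are all illustrative assumptions, not the paper's multiagent model or its lemma).

```python
def simulate(steps=200, delta=0.3, eps=0.05):
    """Scalar discrete-time plant with a relative-threshold
    event-triggered controller (illustrative sketch only)."""
    x = 1.0          # plant state
    u_held = 0.0     # last transmitted control input (zero-order hold)
    triggers = 0
    for _ in range(steps):
        u_desired = -0.8 * x            # nominal feedback law (assumed gain)
        err = abs(u_desired - u_held)   # deviation from the held input
        # Relative threshold: fire only when the deviation exceeds a
        # fraction (delta) of the current input magnitude plus a floor (eps).
        if err >= delta * abs(u_desired) + eps:
            u_held = u_desired          # event: transmit the new input
            triggers += 1
        x = 0.5 * x + u_held            # simple stable plant (assumed)
    return triggers, abs(x)

triggers, final_err = simulate()
```

In this sketch the state remains bounded near zero (consistent with ultimate boundedness rather than exact convergence) while the control input is transmitted only a handful of times out of 200 steps, which is the communication saving the relative-threshold strategy targets.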