Keywords
Reinforcement learning
Control theory (sociology)
Actuator
Computer science
Fault tolerance
Nonlinear system
Bounded function
Lyapunov function
Decentralized system
Fault (geology)
Function (biology)
Control (management)
Optimal control
Control engineering
Mathematical optimization
Mathematics
Distributed computing
Engineering
Artificial intelligence
Geology
Mathematical analysis
Physics
Seismology
Biology
Evolutionary biology
Quantum mechanics
Authors
Yanwei Zhao, Huanqing Wang, Ning Xu, Guangdeng Zong, Xudong Zhao
Identifier
DOI:10.1016/j.chaos.2022.113034
Abstract
This paper addresses the decentralized fault-tolerant control problem for interconnected nonlinear systems under a reinforcement learning strategy. The system under consideration includes unknown actuator faults and asymmetric input constraints. By constructing an improved cost function related to the estimation of the actuator faults for each auxiliary subsystem, the original control issue is converted into finding an array of decentralized optimal control policies. Then, we prove that these optimal control policies ensure that the entire system is stable in the sense of uniform ultimate boundedness. Moreover, a single critic network architecture is developed to obtain the solutions of the Hamilton–Jacobi–Bellman equations, which simplifies the architecture of the reinforcement learning algorithm. All signals in the closed-loop auxiliary subsystems are shown to be uniformly ultimately bounded based on Lyapunov theory, and numerical and practical simulation examples are provided to validate the effectiveness of the designed method.
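For orientation, reinforcement-learning-based decentralized optimal control of this kind is typically posed around a per-subsystem cost functional and its Hamilton–Jacobi–Bellman (HJB) equation. The sketch below shows only the generic form; the symbols $f_i$, $g_i$, $Q_i$, $U_i$, $\phi_i$, and $\hat{W}_i$ are placeholder notation rather than the paper's, and the paper's actual cost additionally embeds the actuator-fault estimate and accounts for the asymmetric input constraints.

\[
J_i\big(x_i(t)\big) = \int_t^{\infty} \Big( Q_i\big(x_i(\tau)\big) + U_i\big(u_i(\tau)\big) \Big)\, d\tau
\]
\[
0 = \min_{u_i} \Big[ Q_i(x_i) + U_i(u_i) + \big(\nabla V_i^{*}(x_i)\big)^{\top} \big( f_i(x_i) + g_i(x_i)\, u_i \big) \Big]
\]

In a single-critic design, the optimal value function is approximated as $V_i^{*}(x_i) \approx \hat{W}_i^{\top}\phi_i(x_i)$ for a chosen basis $\phi_i$, and the critic weights $\hat{W}_i$ are tuned online to drive the HJB residual toward zero; the control policy is then computed directly from the critic's gradient, which is why this architecture simplifies the usual actor–critic structure.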