Reinforcement learning
Computer science
Backstepping
Controller
Control theory
Identifier
Event
Nonlinear system
Optimal control
Control
Adaptive control
Artificial intelligence
Mathematical optimization
Mathematics
Physics
Quantum mechanics
Agronomy
Biology
Programming language
Authors
Hao-Yang Zhu, Yuan-Xin Li, Shaocheng Tong
Identifier
DOI: 10.1109/tfuzz.2023.3235417
Abstract
This article investigates the event-triggered optimized tracking control problem for stochastic nonlinear systems based on reinforcement learning (RL). Using the backstepping strategy, an adaptive RL algorithm is developed under the identifier-critic-actor architecture to achieve event-triggered optimized control (ETOC). Moreover, a novel dynamically adjustable event-triggered mechanism is designed, which adjusts the triggering threshold online to economize communication resources and reduce the computational burden. To overcome the discontinuity of the virtual control signals caused by state-triggering, the virtual controllers are designed with continuously sampled state signals, and the actual optimal controller is redesigned in the last step using the triggered states. Furthermore, the proposed ETOC offers significant advantages in saving network resources, because the event-triggered mechanism is employed in the sensor-to-controller channel and the event-sampled states directly activate the control actions. Finally, it is guaranteed that all signals of the stochastic system are bounded under the presented ETOC method. A simulation example illustrates the effectiveness of the proposed ETOC algorithm.
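The paper's specific triggering law and adaptive RL updates are not given in the abstract, so the following is only a minimal Python sketch of a generic dynamically adjusted, relative-threshold event-trigger of the kind described: an event fires when the deviation between the current state and the last transmitted state exceeds an online-adjusted threshold, and the controller acts only on event-sampled states. The function names, gains, and the threshold-adaptation rule are all illustrative assumptions, not the authors' design.

```python
import numpy as np

def make_dynamic_trigger(eta0=0.5, eta_min=0.05, eta_max=1.0, gamma=0.9):
    """Generic dynamic event-trigger (illustrative, not the paper's law)."""
    # Closure state: last transmitted state and current relative threshold.
    mem = {"last_sent": None, "eta": eta0}

    def trigger(x):
        x = np.asarray(x, dtype=float)
        if mem["last_sent"] is None:  # first sample always transmits
            mem["last_sent"] = x.copy()
            return True, x
        err = np.linalg.norm(x - mem["last_sent"])
        if err >= mem["eta"] * max(np.linalg.norm(x), 1e-8):
            mem["last_sent"] = x.copy()
            # Tighten the threshold after an event so transients are
            # tracked more closely (assumed adaptation rule).
            mem["eta"] = max(eta_min, gamma * mem["eta"])
            return True, x
        # Relax the threshold between events to save communication.
        mem["eta"] = min(eta_max, mem["eta"] / gamma)
        return False, mem["last_sent"]

    return trigger

# Usage: the controller only ever sees event-sampled states.
trigger = make_dynamic_trigger()
rng = np.random.default_rng(0)
x = np.zeros(2)
for k in range(50):
    x = 0.95 * x + 0.1 * rng.standard_normal(2)  # stand-in stochastic plant
    fired, x_hat = trigger(x)
    u = -1.0 * x_hat  # control computed from the last transmitted state
```

Because the trigger sits in the sensor-to-controller channel, the control action is recomputed only when an event fires, which is the source of the communication savings the abstract claims.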