Control theory (sociology)
Reinforcement learning
Controller (irrigation)
Computer science
Artificial neural network
Nonlinear system
Bounded function
Lyapunov function
Sampling (signal processing)
Tracking error
Stability (learning theory)
Adaptive control
Event (particle physics)
Mathematics
Artificial intelligence
Control (management)
Machine learning
Filter (signal processing)
Physics
Mathematical analysis
Biology
Quantum mechanics
Computer vision
Agronomy
Authors
Fanghua Tang, Ben Niu, Guangdeng Zong, Xudong Zhao, Ning Xu
Identifiers
DOI:10.1016/j.neunet.2022.06.039
Abstract
In this paper, an event-triggered control scheme with a periodic characteristic is developed for nonlinear discrete-time systems under an actor-critic architecture of reinforcement learning (RL). A periodic event-triggered mechanism (ETM) is constructed to decide whether the sampled data are delivered to the controller. Meanwhile, the controller is updated only when the event-triggered condition exceeds a prescribed threshold. Compared with traditional continuous ETMs, the proposed periodic ETM guarantees a minimal lower bound on the inter-event intervals and avoids point-by-point evaluation of the triggering condition, so that part of the communication resources can be efficiently economized. The critic and actor neural networks (NNs), built from radial basis function neural networks (RBFNNs), approximate the unknown long-term performance index function and the ideal event-triggered controller, respectively. A rigorous stability analysis based on the Lyapunov difference method substantiates that the closed-loop system can be stabilized: all error signals of the closed-loop system are uniformly ultimately bounded (UUB) under the proposed control scheme. Finally, two simulation examples are given to validate the effectiveness of the control design.
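The two core ingredients of the abstract can be illustrated in a short sketch: a trigger condition that is only checked every `h` sampling steps (so the inter-event interval is lower-bounded by `h` by construction, unlike a continuously checked ETM), and actor/critic approximators built from Gaussian RBF features. This is a minimal illustrative sketch, not the paper's exact design; the plant, the RBF layout, the threshold `delta`, the learning rates, and the TD-style update rules are all assumptions introduced here for demonstration.

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    """Gaussian radial-basis features of a scalar state x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

class PeriodicETC:
    """Sketch of a periodic event-triggered actor-critic controller.

    All gains and update rules are hypothetical, for illustration only.
    """

    def __init__(self, centers, h=5, delta=0.05,
                 lr_a=0.01, lr_c=0.05, gamma=0.95):
        self.centers = centers
        self.h = h                        # trigger checked every h steps
        self.delta = delta                # event-triggering threshold
        self.Wa = np.zeros(len(centers))  # actor NN weights
        self.Wc = np.zeros(len(centers))  # critic NN weights
        self.lr_a, self.lr_c, self.gamma = lr_a, lr_c, gamma
        self.x_event = None               # state at last triggering instant
        self.phi_event = None             # features at last triggering instant
        self.u = 0.0                      # zero-order-held control input
        self.events = []                  # triggering time indices

    def step(self, k, x, stage_cost):
        # Periodic ETM: the trigger is evaluated only every h steps, so the
        # inter-event interval is at least h by construction.
        if k % self.h != 0:
            return self.u
        phi = rbf_features(x, self.centers)
        if self.x_event is not None and abs(x - self.x_event) <= self.delta:
            return self.u                 # no event: hold previous control
        if self.phi_event is not None:
            # TD-style critic update of the long-term performance index
            td = (stage_cost + self.gamma * self.Wc @ phi
                  - self.Wc @ self.phi_event)
            self.Wc += self.lr_c * td * self.phi_event
            # actor update guided by the critic's TD error
            self.Wa -= self.lr_a * td * phi
        self.u = float(self.Wa @ phi)     # controller updated only at events
        self.x_event, self.phi_event = x, phi
        self.events.append(k)
        return self.u

# Hypothetical nonlinear discrete-time plant, for demonstration only
centers = np.linspace(-2.0, 2.0, 9)
ctrl = PeriodicETC(centers)
x, u = 1.0, 0.0
for k in range(200):
    u = ctrl.step(k, x, x ** 2 + 0.1 * u ** 2)
    x = 0.8 * x + 0.1 * np.sin(x) + 0.2 * u
```

Because the trigger is evaluated only at multiples of `h`, every recorded inter-event interval is at least `h` steps, which mirrors the abstract's claim that the periodic ETM guarantees a minimal lower bound on the inter-event intervals while avoiding point-by-point condition checks.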