Reinforcement learning
Control theory
Computer science
Controller
Nonlinear system
Artificial neural network
Optimal control
State (computer science)
Stability theory
Upper and lower bounds
Control system
Control
Mathematical optimization
Mathematics
Artificial intelligence
Algorithm
Engineering
Machine learning
Physics
Quantum mechanics
Mathematical analysis
Electrical engineering
Agronomy
Biology
Authors
Xiangyu Chen, Weiwei Sun, Xinci Gao, LI Yong-shu
Abstract
The optimal control problem of discrete-time nonlinear unknown systems with time-delay control input is the focus of this work. To reduce communication costs, a reinforcement learning-based event-triggered controller is proposed. Under the proposed control method, the asymptotic stability of the closed-loop system is demonstrated, and an upper bound on the infinite-horizon performance index can be calculated beforehand. The event-triggered condition requires the state information at the next time step. To forecast the next state and achieve optimal control, three neural networks (NNs) are introduced to approximate the system state, the value function, and the optimal control, respectively. Additionally, an M NN is utilized to cope with the time-delay term of the control input. Moreover, taking the estimation errors of the NNs into account, the uniform ultimate boundedness of the state and the NN weight estimation errors is guaranteed. Ultimately, the validity of the proposed approach is illustrated by simulations.
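For orientation only, below is a minimal Python/NumPy sketch of the control-loop skeleton the abstract describes: an actor NN driven by the last transmitted state, a state-prediction NN whose output feeds the event-triggered condition, a critic NN for the value function, and a buffer modelling the delayed control input. The network sizes, trigger threshold, delay length, and placeholder plant dynamics are all assumptions made for illustration; the paper's actual weight-update laws and the M NN handling the time-delay term are not reproduced here.

```python
import numpy as np

def mlp(sizes, rng):
    """Random weights for a tiny single-hidden-layer approximator."""
    return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    """Evaluate the tiny NN with tanh hidden activation."""
    for W in weights[:-1]:
        x = np.tanh(x @ W)
    return x @ weights[-1]

rng = np.random.default_rng(0)
n_x, n_u, tau = 2, 1, 3                      # state dim, input dim, input delay (assumed)
B = rng.standard_normal((n_u, n_x)) * 0.1    # placeholder input map of the unknown plant

state_nn  = mlp([n_x + n_u, 16, n_x], rng)   # approximates the system state (next-state prediction)
critic_nn = mlp([n_x, 16, 1], rng)           # approximates the value function
actor_nn  = mlp([n_x, 16, n_u], rng)         # approximates the optimal control

def triggered(x_pred, x_last_sent, threshold=0.05):
    """Event-triggered condition: transmit only when the predicted deviation is large."""
    return np.linalg.norm(x_pred - x_last_sent) > threshold

x = np.array([0.5, -0.3])
x_last_sent = x.copy()
u_buffer = [np.zeros(n_u)] * tau             # queue of delayed control inputs u(k - tau)

for k in range(50):
    u = forward(actor_nn, x_last_sent)       # actor acts on the last transmitted state
    u_delayed = u_buffer.pop(0)              # input actually applied is tau steps old
    u_buffer.append(u)
    x_pred = forward(state_nn, np.concatenate([x, u_delayed]))  # predicted next state
    if triggered(x_pred, x_last_sent):       # trigger check uses the predicted next state
        x_last_sent = x.copy()               # communicate the current state
    value = forward(critic_nn, x)            # critic estimate (weight tuning not shown)
    x = 0.9 * x + u_delayed @ B              # placeholder for the unknown plant dynamics
```

The point of the sketch is the data flow: the controller only receives a new state when the trigger fires, and the state-prediction network supplies the "next time state information" that the trigger condition needs without extra communication.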