Keywords: reinforcement learning, control theory, adaptive control, artificial neural networks, nonlinear systems, discrete-time systems, stability theory, Lyapunov functions, bounded functions, function approximation, control engineering
Authors
Dong Liu, Guang-Hong Yang
DOI: 10.1080/00207721.2018.1498557
Abstract
This paper deals with model-free adaptive control (MFAC) based on a reinforcement learning (RL) strategy for a family of discrete-time nonlinear processes. The controller is constructed by exploiting the approximation ability of neural networks, and a new actor-critic algorithm is developed to estimate the strategic utility function and the performance index function. More specifically, the proposed RL-based MFAC scheme designs the controller without needing to estimate the future output y(k+1). Furthermore, Lyapunov stability analysis shows that the closed-loop system is uniformly ultimately bounded. Simulations validate the theoretical results.
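The abstract outlines an actor-critic structure: a critic network learns a cost-to-go (performance index) estimate from one-step utilities via temporal differences, while an actor network generates the control from measured outputs only. Below is a minimal sketch of that structure, not the authors' algorithm; the plant dynamics, network sizes, learning rates, and update rules are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown discrete-time nonlinear plant, used only to generate data;
# the controller never consults this model (hence "model-free").
# These dynamics are invented for illustration.
def plant(y, u):
    return 0.6 * np.sin(y) + u

H = 8                                    # hidden-layer width (illustrative)
Wa = rng.normal(scale=0.1, size=H)       # actor output weights
Wc = rng.normal(scale=0.1, size=H)       # critic output weights
Va = rng.normal(scale=0.5, size=(H, 2))  # actor hidden weights (fixed)
Vc = rng.normal(scale=0.5, size=(H, 2))  # critic hidden weights (fixed)

def feats(V, x):
    return np.tanh(V @ x)                # bounded basis functions

alpha_a, alpha_c, gamma = 0.01, 0.01, 0.9
y, y_ref = 0.0, 1.0                      # output and reference to track

for k in range(300):
    x = np.array([y, y - y_ref])
    u = float(np.clip(Wa @ feats(Va, x), -2.0, 2.0))  # actor's control
    y_next = plant(y, u)
    x_next = np.array([y_next, y_next - y_ref])
    cost = (y_next - y_ref) ** 2                      # one-step utility
    # Critic: temporal-difference update of the estimated cost-to-go.
    td = cost + gamma * (Wc @ feats(Vc, x_next)) - Wc @ feats(Vc, x)
    Wc = Wc + alpha_c * td * feats(Vc, x)
    # Actor: heuristic adjustment driven by the critic's TD signal.
    Wa = Wa - alpha_a * td * feats(Va, x)
    y = y_next
```

In the paper, the weight-update laws are derived so that the Lyapunov analysis guarantees uniform ultimate boundedness of the closed loop; in this sketch, boundedness of y comes only from clipping the control input, and no convergence claim is made.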