Identifier
Optimal control
Control theory
Bounded function
Lyapunov function
Exponential stability
Bellman equation
Convergence
Artificial neural network
Nonlinear system
Reinforcement learning
Mathematical optimization
Adaptive control
Hamilton–Jacobi–Bellman equation
Computer science
Mathematics
Control
Artificial intelligence
Physics
Quantum mechanics
Mathematical analysis
Economics
Programming language
Economic growth
Authors
Shubhendu Bhasin, Rushikesh Kamalapurkar, M. Johnson, Kyriakos G. Vamvoudakis, Frank L. Lewis, Warren E. Dixon
Source
Journal: Automatica
[Elsevier]
Date: 2012-10-23
Volume/Issue: 49 (1): 82-92
Citations: 503
Identifiers
DOI:10.1016/j.automatica.2012.09.019
Abstract
An online adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem for continuous-time uncertain nonlinear systems. A novel actor–critic–identifier (ACI) architecture is proposed to approximate the Hamilton–Jacobi–Bellman equation using three neural network (NN) structures: actor and critic NNs approximate the optimal control and the optimal value function, respectively, and a robust dynamic neural network identifier asymptotically approximates the uncertain system dynamics. An advantage of the ACI architecture is that learning by the actor, critic, and identifier is continuous and simultaneous, without requiring knowledge of the system drift dynamics. Convergence of the algorithm is analyzed using Lyapunov-based adaptive control methods. A persistence of excitation condition is required to guarantee exponential convergence to a bounded region in the neighborhood of the optimal control and uniformly ultimately bounded (UUB) stability of the closed-loop system. Simulation results demonstrate the performance of the actor–critic–identifier method for approximate optimal control.
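To make the abstract's architecture concrete, the following is a minimal illustrative sketch, not the paper's actual NN structures or update laws: for a scalar linear-quadratic problem (dynamics dx/dt = a·x + b·u with unknown drift a, cost ∫ q·x² + r·u² dt), a one-parameter critic V̂(x) = w_c·x², a one-parameter actor u(x) = −(b/r)·w_a·x, and a scalar identifier estimate â all adapt simultaneously from a sinusoidal probing signal (the persistence-of-excitation condition mentioned above). The gains, basis choice, and gradient-descent updates are assumptions for illustration only; the paper uses multi-layer NNs and Lyapunov-derived laws.

```python
# Illustrative actor-critic-identifier sketch (hypothetical update laws,
# NOT the paper's): scalar LQR problem dx/dt = a*x + b*u, cost q*x^2 + r*u^2.
# Critic:     V(x) ~ w_c * x^2         (basis phi(x) = x^2)
# Actor:      u(x) = -(b/r) * w_a * x  (form of the optimal LQR policy)
# Identifier: a_hat estimates the unknown drift coefficient a.
import numpy as np

a_true, b, q, r = -1.0, 1.0, 1.0, 1.0   # plant and cost (a_true is "unknown")
dt, steps = 0.001, 200_000               # Euler step, 200 s of simulated time

w_c, w_a, a_hat = 0.0, 0.0, 0.0          # critic, actor, identifier parameters
x = 1.0
rng = np.random.default_rng(0)
# probing signal for persistence of excitation (sinusoid + small noise)
probe = 0.5 * np.sin(dt * np.arange(steps)) + 0.1 * rng.standard_normal(steps)

for k in range(steps):
    u_nom = -(b / r) * w_a * x           # actor's nominal control
    u = u_nom + probe[k]                 # applied control with exploration
    x_dot = a_true * x + b * u           # true (measured) state derivative

    # identifier: gradient step on the state-derivative prediction error
    a_hat += dt * 5.0 * ((x_dot - (a_hat * x + b * u)) * x)

    # critic: gradient step on the HJB/Bellman residual, evaluated with the
    # identifier's model and the actor's nominal (noise-free) control
    delta = q * x**2 + r * u_nom**2 + 2.0 * w_c * x * (a_hat * x + b * u_nom)
    grad = 2.0 * x * (a_hat * x + b * u_nom)          # d(delta)/d(w_c)
    w_c -= dt * 20.0 * delta * grad / (1.0 + grad**2)  # normalized descent

    # actor: slowly track the critic-implied policy gain
    w_a += dt * 1.0 * (w_c - w_a)

    x += dt * x_dot                      # Euler integration of the plant

# For a=-1, b=q=r=1 the Riccati solution is P = sqrt(2) - 1 ~ 0.414,
# so w_c and w_a should approach ~0.414 and a_hat should approach -1.
print(a_hat, w_c, w_a)
```

All three parameters adapt continuously and simultaneously from the same trajectory, which is the defining feature of the ACI architecture; the probing signal supplies the excitation that the convergence analysis requires.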