Keywords
Optimal control, Control theory, Reinforcement learning, Settling time, Controller, Hamilton–Jacobi–Bellman equation, Mathematical optimization, Stability, Convergence, Bounded function, Computer science, Adaptive control, Mathematics, Control engineering, Engineering, Artificial intelligence, Mathematical analysis, Machine learning, Step response
Authors
Mahdi Niroomand, Reihaneh Kardehi Moghaddam, Hamidreza Modares, Mohammad Bagher Naghibi Sistani
Identifier
DOI: 10.1177/10775463241307703
Abstract
This paper presents a fixed-time optimal control design approach using reinforcement learning (RL) that guarantees not only fixed-time convergence of the learning algorithm to an optimal controller but also fixed-time stability of the learned control solution. To ensure the former, the zero-finding capabilities of zeroing neural networks (ZNNs) are leveraged, and novel adaptive laws are presented accordingly. To ensure the latter, conditions on the cost function are provided under which the corresponding optimal controller assures fixed-time stability of the closed-loop system. It is also shown that imposing a fixed-time stability constraint on the infinite-horizon optimal control solution in effect solves the classical fixed-final-time (FFTM) finite-horizon optimal control problem. The Hamilton–Jacobi–Bellman (HJB) equation for the FFTM optimal control problem is time-varying, which makes it difficult or even impossible to learn directly online using RL. The presented approach bypasses this difficulty by developing an online solution to the infinite-horizon optimal control problem under a fixed-time stability constraint and with fixed-time convergent tuning laws. This makes both the learning time and the closed-loop settling time predictable, tunable, and bounded. Simulation results for fixed-time optimal adaptive stabilization of a torsional pendulum system illustrate this new design approach for nonlinear optimal control.
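For context, the contrast drawn in the abstract can be seen in the standard HJB forms (not reproduced from the paper; written here for nominal dynamics \(\dot{x} = f(x) + g(x)u\) and stage cost \(r(x,u)\)). The infinite-horizon HJB equation is stationary,
\[
0 = \min_{u} \Big[ r(x,u) + \nabla V^{*}(x)^{\top}\big(f(x) + g(x)u\big) \Big],
\]
whereas the fixed-final-time HJB equation carries an explicit time derivative and a terminal boundary condition,
\[
-\frac{\partial V^{*}(x,t)}{\partial t} = \min_{u} \Big[ r(x,u) + \nabla_{x} V^{*}(x,t)^{\top}\big(f(x) + g(x)u\big) \Big], \qquad V^{*}(x,t_f) = \psi\big(x(t_f)\big),
\]
which is why learning it directly online is hard.

The fixed-time convergence and bounded settling time claimed above can also be illustrated with a minimal sketch. The code below is not the paper's adaptive laws; it only simulates the scalar fixed-time error dynamics commonly used in ZNN-type designs, with gains alpha, beta and exponents p, q chosen here as illustrative assumptions.

```python
# A minimal sketch, not the paper's adaptive laws: scalar fixed-time error
# dynamics of the kind used in zeroing-neural-network (ZNN) designs,
#     de/dt = -alpha*|e|^p*sign(e) - beta*|e|^q*sign(e),   0 < p < 1 < q,
# whose settling time is bounded by 1/(alpha*(1-p)) + 1/(beta*(q-1))
# for every initial condition (gains and exponents below are assumed values).
import numpy as np

alpha, beta = 2.0, 2.0          # tunable gains
p, q = 0.5, 1.5                 # exponents satisfying 0 < p < 1 < q
T_MAX = 1.0 / (alpha * (1.0 - p)) + 1.0 / (beta * (q - 1.0))  # settling-time bound

def settling_time(e0, dt=1e-4, tol=1e-6):
    """Integrate the fixed-time dynamics from e(0) = e0 until |e| < tol."""
    e, t = float(e0), 0.0
    while abs(e) > tol and t < 10.0 * T_MAX:
        e += dt * (-alpha * np.sign(e) * abs(e) ** p
                   - beta * np.sign(e) * abs(e) ** q)
        t += dt
    return t

for e0 in (0.1, 10.0, 1e4):
    print(f"e(0) = {e0:>8.1f}: settles in {settling_time(e0):.3f} s "
          f"(bound {T_MAX:.3f} s)")
```

Regardless of the initial error, the simulated settling time stays below T_MAX, which depends only on the chosen gains and exponents; this is the sense in which a fixed-time design makes settling times predictable, tunable, and bounded.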