Keywords
Reinforcement learning
Computer science
Robustness
Convergence
Control theory
Trajectory
Artificial neural network
Identifier
Scheme
Tracking
Optimal control
Artificial intelligence
Control
Mathematical optimization
Authors
Ning Wang, Ying Gao, Yang Chen, Xuefeng Zhang
Identifier
DOI:10.1016/j.neucom.2021.04.133
Abstract
In this paper, subject to completely unknown system dynamics and input constraints, a reinforcement learning-based finite-time trajectory tracking control (RLFTC) scheme is created for an unmanned surface vehicle (USV) by combining an actor-critic reinforcement learning (RL) mechanism with a finite-time control technique. Unlike previous RL-based tracking schemes, which require infinite-time convergence and are therefore rather sensitive to complex unknowns, an actor-critic finite-time control structure is created by employing adaptive neural network identifiers to recursively update the actor and critic, such that learning-based robustness can be sufficiently enhanced. Moreover, deduced from the Bellman error formulation, the proposed RLFTC is directly optimized in a finite-time manner. Theoretical analysis shows that the proposed RLFTC scheme ensures semi-global practical finite-time stability (SGPFS) of the closed-loop USV system, with tracking errors converging to an arbitrarily small neighborhood of the origin in finite time, subject to an optimal cost. Both mathematical simulations and virtual-reality experiments demonstrate the effectiveness and superiority of the proposed RLFTC scheme.
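The abstract's core mechanism is an actor-critic pair updated from a Bellman error. As a rough illustration of that mechanism only, the sketch below trains a linear critic and actor on a toy one-dimensional tracking-error problem; the dynamics, features, cost weights, and learning rates are all illustrative assumptions and not the paper's RLFTC design or its finite-time update laws.

```python
# A minimal actor-critic sketch driven by a Bellman (temporal-difference)
# error. The 1-D tracking-error dynamics, polynomial features, and gains
# below are illustrative assumptions, not the paper's RLFTC scheme.
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.98          # discount factor for the cost-to-go

def step(e, u):
    # assumed toy tracking-error dynamics driven by control input u
    return 0.95 * e + 0.1 * u

def features(e):
    # simple polynomial features shared by the critic and actor approximators
    return np.array([e, e * e, 1.0])

w = np.zeros(3)       # critic weights: cost-to-go V(e) ~ w . features(e)
theta = np.zeros(3)   # actor weights: control u(e) ~ theta . features(e)

for episode in range(300):
    e = rng.uniform(-1.0, 1.0)
    for t in range(60):
        phi = features(e)
        noise = 0.2 * rng.standard_normal()      # exploration
        u = float(theta @ phi) + noise
        e_next = float(np.clip(step(e, u), -2.0, 2.0))  # keep training bounded
        cost = e * e + 0.1 * u * u               # quadratic tracking + effort cost
        # Bellman (TD) error of the cost-to-go critic
        delta = cost + gamma * float(w @ features(e_next)) - float(w @ phi)
        w += 0.01 * delta * phi                  # critic: reduce the Bellman error
        theta -= 0.01 * delta * noise * phi      # actor: move against costly actions
        e = e_next

# evaluate the learned deterministic policy on the tracking error
e = 1.0
for _ in range(300):
    e = step(e, float(theta @ features(e)))
print(abs(e))
```

The actor step uses the exploration noise as a crude advantage signal (a higher-than-predicted cost pushes the policy away from the perturbed action), which is a standard actor-critic heuristic; the paper's scheme instead derives recursive neural-identifier updates with finite-time convergence guarantees.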