Application of an Off-Policy Reinforcement Learning Algorithm for $H_\infty$ Control Design of Nonlinear Structural Systems With Completely Unknown Dynamics
ABSTRACT This paper proposes a model-free, online, off-policy algorithm based on reinforcement learning (RL) for vibration attenuation of earthquake-excited structures through the design of an optimal controller. The design is formulated as a two-player zero-sum game whose solution is characterized by a Hamilton–Jacobi–Isaacs (HJI) equation, which is extremely difficult, and often impossible, to solve analytically for the value function and the associated optimal controller. The proposed strategy uses an actor-critic-disturbance structure to learn the solution of the HJI equation online and forward in time, without requiring any knowledge of the system dynamics. The control policy, the disturbance policy, and the value function are approximated by the actor, disturbance, and critic neural networks (NNs), respectively. Within a policy iteration scheme, the NN weights are computed by the least-squares (LS) method at each iteration. In the present study, the convergence of the proposed algorithm is investigated through two distinct examples. Furthermore, the performance of the off-policy RL strategy is studied in reducing the response of a seismically excited nonlinear structure equipped with an active mass damper (AMD) for two cases of state feedback. The simulation results demonstrate the effectiveness of the proposed algorithm in application to civil engineering structures.
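For orientation, the zero-sum game and HJI equation referenced above take the following standard form in the nonlinear $H_\infty$ literature; the notation (system functions $f$, $g$, $k$, weights $Q$, $R$, and attenuation level $\gamma$) is generic and not taken from the paper itself:

$$\dot{x} = f(x) + g(x)\,u + k(x)\,w, \qquad V(x(t)) = \int_t^\infty \big( Q(x) + u^\top R\, u - \gamma^2 w^\top w \big)\, d\tau,$$

$$0 = Q(x) + (\nabla V)^\top f(x) - \tfrac{1}{4}(\nabla V)^\top g(x) R^{-1} g(x)^\top \nabla V + \tfrac{1}{4\gamma^2}(\nabla V)^\top k(x) k(x)^\top \nabla V,$$

with saddle-point policies $u^* = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla V$ (control) and $w^* = \tfrac{1}{2\gamma^2} k(x)^\top \nabla V$ (worst-case disturbance).

The following is a rough, hypothetical sketch of the per-iteration LS step the abstract describes, simplified here to a critic-only update; the function names, basis features, and data layout are illustrative assumptions, not the paper's exact actor-critic-disturbance scheme:

```python
import numpy as np

# Minimal sketch of one least-squares critic update in policy iteration,
# under generic assumptions. The critic approximates V(x) ~= w^T phi(x);
# over sampled intervals the Bellman residual
#     w^T (phi(x_k) - phi(x_k')) - r_k
# is driven to zero in the LS sense.

def ls_critic_update(phi, phi_next, utility):
    """Solve w from  w^T (phi(x_k) - phi(x_k')) = r_k  for all samples k.

    phi, phi_next : (N, p) basis features at interval start / end states
    utility       : (N,) integrated cost, e.g. Q(x) + u^T R u - gamma^2 |w|^2
    """
    A = phi - phi_next                         # (N, p) regression matrix
    w, *_ = np.linalg.lstsq(A, utility, rcond=None)
    return w                                   # critic weights this iteration

# Toy usage with random data, only to show the shapes involved.
rng = np.random.default_rng(0)
phi, phi_next = rng.normal(size=(200, 6)), rng.normal(size=(200, 6))
r = rng.normal(size=200)
weights = ls_critic_update(phi, phi_next, r)
```

Because the regression data can be collected under an arbitrary behavior policy, an update of this form needs no model of the system dynamics, which is the sense in which the abstract's method is off-policy and model-free.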