A Novel Integral Reinforcement Learning-Based Control Method Assisted by Twin Delayed Deep Deterministic Policy Gradient for Solid Oxide Fuel Cell in DC Microgrid
Journal: IEEE Transactions on Sustainable Energy [Institute of Electrical and Electronics Engineers] Date: 2022-11-24 Volume/Issue: 14 (1): 688-703 Citations: 14
Identifier
DOI: 10.1109/tste.2022.3224179
Abstract
This paper proposes a new online integral reinforcement learning (IRL)-based control algorithm for the solid oxide fuel cell (SOFC) to overcome the long-standing problems of model dependency and sensitivity to the offline training dataset in existing SOFC control approaches. The proposed method automatically updates the optimal control gains through online neural network training. Unlike other online learning-based control methods, which rely on the assumption of an initial stabilizing control or a trial-and-error-based initial control policy search, the proposed method employs the offline twin delayed deep deterministic policy gradient (TD3) algorithm to systematically determine the initial stabilizing control policy. Compared to conventional IRL-based control, the proposed method greatly reduces the computational burden without compromising control performance. The excellent performance of the proposed method is verified by hardware-in-the-loop experiments.
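The core idea in the abstract, starting IRL policy iteration from a known stabilizing policy and iterating toward the optimal gains, can be illustrated with a minimal sketch. The scalar plant `a, b`, cost weights `q, r`, and initial gain `k` below are hypothetical stand-ins (they are not the paper's SOFC model, and the TD3-learned initial policy is replaced by a hand-picked stabilizing gain); the loop is Kleinman-style policy iteration, which is the recursion that IRL carries out from measured data without needing the drift dynamics.

```python
import math

# Hypothetical scalar linear plant (illustration only, not the SOFC model):
#   x' = a*x + b*u, with quadratic cost integrand q*x^2 + r*u^2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Initial stabilizing gain. In the paper this role is played by the policy
# learned offline with TD3; here we simply pick k0 with a - b*k0 < 0.
k = 2.0

for _ in range(8):
    assert a - b * k < 0, "each iterate must remain stabilizing"
    # Policy evaluation: scalar Lyapunov equation
    #   2*(a - b*k)*p + q + r*k**2 = 0
    p = (q + r * k ** 2) / (2 * (b * k - a))
    # Policy improvement: k <- (1/r)*b*p
    k = b * p / r

# The iterates converge to the LQR gain solving the algebraic Riccati
# equation 2*a*p + q - (b*p)**2/r = 0, i.e. k* = (a + sqrt(a^2 + q*b^2/r))/b.
k_star = (a + math.sqrt(a ** 2 + q * b ** 2 / r)) / b
print(k, k_star)  # both approach 1 + sqrt(2) ≈ 2.4142
```

Convergence is quadratic, so a handful of iterations suffices; the point of the TD3 warm start in the paper is that the very first gain must already be stabilizing for this recursion to be valid.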