Keywords: robustness, control theory, computer science, reinforcement learning, adaptability, DC motor, buck converter, control engineering, engineering, voltage, artificial intelligence, control, electrical engineering
Authors
Tianxiao Yang, Chenggang Cui, Chuanlin Zhang, Jun Yang
Identifier
DOI:10.1080/23307706.2023.2201587
Abstract
Recent application studies of deep reinforcement learning (DRL) in power electronic systems have successfully demonstrated its superiority over conventional model-based control design methods, stemming from its adaptation and self-optimisation capabilities. However, the inevitable gap between offline training and real-life application presents a significant challenge for practical implementation, owing to insufficient robustness. With this in mind, this paper proposes a novel robust DRL controller that fuses an extended state observer (ESO) for the DC–DC buck converter system feeding constant power loads (CPLs). Specifically, the mismatched lumped terms are reconstructed by an ESO in real time and then fed forward into the agent's action, aiming to improve adaptability to parameter variations of real-life converter systems. Through simulation and experimental tests, the robustness enhancement of the proposed framework over model-free DRL and conventional PI controllers is clearly verified.
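The ESO-based feedforward idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact converter model: the first-order plant, observer gains, and variable names are all assumptions chosen for clarity. The observer treats the unknown lumped term as an extended state and drives its estimate with the output error; in the paper's framework, that estimate would be fed forward into the DRL agent's action.

```python
def simulate_eso(b0=100.0, beta1=200.0, beta2=1.0e4,
                 d_true=-50.0, u=0.5, dt=1e-5, steps=20000):
    """Second-order linear ESO tracking a constant lumped disturbance.

    Illustrative plant (single integrator channel, assumed for the sketch):
        y_dot = d_true + b0 * u
    Observer states:
        z1 -> estimate of the output y
        z2 -> estimate of the lumped disturbance d_true
    Error dynamics have characteristic polynomial s^2 + beta1*s + beta2,
    so beta1=200, beta2=1e4 places both observer poles at s = -100.
    """
    y = 0.0
    z1, z2 = 0.0, 0.0
    for _ in range(steps):
        # plant step (forward-Euler integration)
        y += (d_true + b0 * u) * dt
        # observer step driven by the output estimation error
        e = y - z1
        z1 += (z2 + b0 * u + beta1 * e) * dt
        z2 += beta2 * e * dt
    return z2  # converges toward d_true

if __name__ == "__main__":
    d_hat = simulate_eso()
    print(f"estimated lumped term: {d_hat:.2f}")
```

In a controller of the kind the abstract describes, the agent's raw action `a` would be compensated before application, e.g. `u = a - z2 / b0`, so the learned policy does not have to absorb plant-parameter mismatch on its own; the exact fusion used in the paper may differ.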