Reinforcement learning
Microgrid
Lyapunov function
Reinforcement
Computer science
Energy management
Control engineering
Control theory
Energy (signal processing)
Artificial intelligence
Engineering
Control (management)
Mathematics
Structural engineering
Physics
Nonlinear system
Statistics
Quantum mechanics
Authors
Guokai Hao, Yuanzheng Li, Yang Li, Lin Jiang, Zhigang Zeng
Identifier
DOI:10.1109/tnnls.2024.3496932
Abstract
The rapid development of renewable energy sources (RESs) has led to their increased integration into microgrids (MGs), emphasizing the need for safe and efficient energy management in MG operations. We investigate methods of MG energy management, primarily categorized into model-based and model-free approaches. Due to a lack of incremental knowledge, model-based methods need to be re-engineered for new scenarios during the optimization process, leading to reduced computational efficiency. In contrast, model-free methods can acquire incremental knowledge via trial and error in the training phase and output energy management schemes rapidly. However, ensuring the safety of the scheme during the training phase poses significant challenges. To address these challenges, we propose a safe reinforcement learning (SRL) framework. The proposed SRL framework first includes a safety assessment optimization model (SAOM) to evaluate scheme constraints and refine unsafe schemes, ensuring MG safety. Subsequently, based on the SAOM, the MG energy management problem is formulated as an assess-based constrained Markov decision process (A-CMDP), enabling SRL to be applied to this problem. We then adopt a Lyapunov-based safe policy optimization for agent policy learning, which confines policy updates within a safe boundary and theoretically ensures the safety of the MG throughout the learning process. Numerical studies highlight the superior performance of our proposed method. Specifically, the SRL framework effectively learns an energy management policy, ensures MG safety, and demonstrates outstanding outcomes in the economic operation of the MG.
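The two safety mechanisms described above can be sketched in a few lines. This is a minimal, illustrative toy, not the authors' implementation: the function names, the power bounds, and the projection-by-clipping rule are all assumptions standing in for the paper's SAOM and Lyapunov-based update check.

```python
def assess_and_refine(dispatch_kw, p_min_kw, p_max_kw):
    """Toy stand-in for a safety assessment optimization model (SAOM).

    Checks a proposed dispatch against a feasible power band and, instead of
    rejecting an unsafe scheme, refines it to the nearest safe value
    (here, simple clipping; the paper solves an optimization model).
    Returns the refined dispatch and whether a constraint was violated.
    """
    safe = max(p_min_kw, min(p_max_kw, dispatch_kw))
    return safe, safe != dispatch_kw


def lyapunov_gate(cost_old, cost_new, budget=0.0):
    """Toy Lyapunov-style check on a policy update.

    Accepts the update only if the estimated constraint cost does not grow
    beyond the allowed budget, keeping updates inside a safe boundary.
    """
    return cost_new - cost_old <= budget
```

For example, a proposed 1.5 kW dispatch against a 0–1 kW band would be refined to 1.0 kW and flagged as a violation, while a candidate policy whose estimated constraint cost rises from 0.4 to 0.7 would be rejected by the gate.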