Transient (computer programming)
Reinforcement learning
Theory (learning stability)
Electric power system
Control theory (sociology)
Computer science
Optimal control
Decentralized system
Grid
Interior-point method
Control engineering
Control (management)
Mathematical optimization
Power (physics)
Engineering
Artificial intelligence
Mathematics
Algorithm
Machine learning
Physics
Geometry
Quantum mechanics
Operating system
Authors
Hongtai Zeng, Yanzhen Zhou, Qinglai Guo, Zhongmin Cai, Hongbin Sun
Identifier
DOI: 10.17775/cseejpes.2020.04610
Abstract
Preventive transient stability control is an effective measure for a power system to withstand high-probability severe contingencies. Mathematically, it is an optimal power flow problem with transient stability constraints. Because these constraints involve the differential-algebraic equations of the transient dynamics, the problem is difficult and time-consuming to solve. To address these issues, this paper presents a novel deep reinforcement learning (DRL) framework for preventive transient stability control of power systems. Distributed deep deterministic policy gradient (DDPG) is used to train a DRL agent that learns its control policy through massive interactions with a grid simulator. Once properly trained, the agent can instantaneously provide effective strategies that adjust the system to a safe operating point at a near-optimal operating cost. The effectiveness of the proposed method is verified through numerical experiments on the New England 39-bus system and the NPCC 140-bus system.
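The framework trains a deterministic-policy-gradient agent against a grid simulator. As a rough illustration of the actor-critic updates such an agent performs, below is a minimal single-step sketch with linear function approximators and a toy stand-in reward. All names, dimensions, and gains here (`toy_reward`, `S`, `A`, `lr`) are illustrative assumptions, not the paper's actual environment, network architecture, or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 4, 2                                   # toy state/action dimensions (assumed)
W_a = rng.normal(scale=0.1, size=(A, S))      # actor weights:  a = W_a @ s + b_a
b_a = np.zeros(A)
w_q = rng.normal(scale=0.1, size=S + A)       # critic weights: Q(s, a) = w_q @ [s; a]

def actor(s):
    return W_a @ s + b_a

def critic(s, a):
    return w_q @ np.concatenate([s, a])

def toy_reward(a):
    # hypothetical stand-in for the grid simulator: penalize distance of the
    # adjusted setpoints from an assumed "safe" operating point at a = 1
    return -np.sum((a - 1.0) ** 2)

lr = 1e-2
for _ in range(1000):
    s = rng.normal(size=S)
    a = actor(s) + rng.normal(scale=0.1, size=A)   # exploration noise
    r = toy_reward(a)
    # single-step episode, so the TD target is just the reward
    td_err = r - critic(s, a)
    w_q += lr * td_err * np.concatenate([s, a])    # critic: LMS step on TD error
    # deterministic policy gradient: dQ/da = w_q[S:] for a linear critic
    dq_da = w_q[S:]
    W_a += lr * np.outer(dq_da, s)                 # actor: ascend Q through the policy
    b_a += lr * dq_da
```

In the paper's setting, the toy reward would be replaced by time-domain simulation of the post-contingency transient, and the linear maps by deep networks with the replay buffers, target networks, and distributed workers that DDPG-style training normally uses.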