Yuyang Bai, Siyuan Chen, Jun Jason Zhang, Jian Xu, Tianlu Gao, Xiaohui Wang, Wenzhong Gao
Source
Journal: Applied Energy [Elsevier] · Date: 2022-11-18 · Volume/Issue: 330: 120294 · Citations: 14
Identifier
DOI: 10.1016/j.apenergy.2022.120294
Abstract
Highlights:
• Regional graph attention is used to efficiently capture neighborhood features.
• A value-decomposition structure is applied for multi-agent training.
• The proposed distributed dispatching method learns more effective strategies.
• The proposed algorithm adapts to network topology changes and time-granularity scales.

In this article, an adaptive active-power rolling-dispatch strategy based on distributed deep reinforcement learning is proposed to deal with the uncertainty of high-penetration renewable energy. For each agent, recurrent neural network layers and graph attention layers are used in its network structure to improve the generalization ability of the multiple agents in active power flow control. Furthermore, a regional graph attention network algorithm, which effectively helps agents aggregate the regional information of their neighborhoods, is proposed to improve the agents' information-capture ability. We adopt the 'centralized training, distributed execution' structure to improve the effectiveness of the proposed methods in dynamic environments. The case studies demonstrate that the proposed algorithm helps multiple agents learn effective active power control strategies, and that each agent generalizes well across time granularities and network topologies. We expect such an approach to improve the practicability and adaptability of distributed AI methods for power system control.
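The neighborhood aggregation that graph attention layers perform can be illustrated by a minimal single-layer sketch in the style of standard graph attention networks. This is not the paper's regional variant or its actual parameters; the weight matrix `W`, attention vector `a`, and LeakyReLU slope below are illustrative placeholders:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """Elementwise LeakyReLU with slope alpha on the negative side."""
    return np.where(x > 0, x, alpha * x)

def graph_attention(H, A, W, a):
    """One graph-attention aggregation step (standard GAT-style sketch).

    H: (N, F) node features; A: (N, N) adjacency (1 = neighbor, self-loops included);
    W: (F, Fp) shared linear map; a: (2*Fp,) attention vector.
    Returns (N, Fp): each node's attention-weighted average of its neighbors.
    """
    Wh = H @ W                                   # transformed features, (N, Fp)
    Fp = Wh.shape[1]
    # e[i, j] = LeakyReLU(a . [Wh_i || Wh_j]), built by broadcasting the two halves of a
    e = leaky_relu((Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :])
    # mask out non-neighbors, then softmax over each node's neighborhood
    e = np.where(A > 0, e, -1e9)
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)   # rows sum to 1 over neighbors
    return att @ Wh

# Tiny example: 3 nodes, node 0 and node 2 are not connected.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
out = graph_attention(H, A, W=np.eye(2), a=np.ones(4))
```

Because node 0's neighborhood is masked to {0, 1} and the example scores tie, its output is the plain average of its two neighbors' features; the paper's regional variant additionally aggregates information over an agent's region rather than single-hop neighbors only.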