Keywords
Reinforcement learning
Partially observable Markov decision process
Computer science
Voltage
Graph
Voltage regulation
Markov decision process
Control theory (sociology)
Markov process
Control engineering
Markov chain
Engineering
Control (management)
Machine learning
Artificial intelligence
Markov model
Electrical engineering
Mathematics
Statistics
Theoretical computer science
Authors
Chaoxu Mu,Zhaoyang Liu,Jun Yan,Hongjie Jia,Xiaoyu Zhang
Source
Journal: IEEE Transactions on Smart Grid
[Institute of Electrical and Electronics Engineers]
Date: 2024-03-01
Volume/Issue: 15 (2): 1399-1409
Cited by: 1
Identifier
DOI: 10.1109/TSG.2023.3298807
Abstract
Active voltage control (AVC) is a widely used technique for improving voltage quality, which is essential in emerging active distribution networks (ADNs). However, the voltage fluctuations caused by intermittent renewable energy are difficult for traditional voltage control methods to handle. In this paper, the voltage control problem is formulated as a decentralized partially observable Markov decision process (Dec-POMDP), and a multi-agent reinforcement learning (MARL) algorithm is developed that treats each controllable device as an agent. The new formulation aims to adjust the agents' strategies so as to stabilize the voltage within a specified range and reduce network loss. To better represent the mutual interaction between agents, a graph convolutional network (GCN) is introduced. By aggregating the information of adjacent agents, the GCN effectively extracts complex latent features, thereby promoting the generation of voltage control strategies for the agents. Meanwhile, a barrier function is applied in MARL to keep the system voltage within a safe operating range. Comparative studies with traditional voltage control and other MARL methods on the IEEE 33-bus and 141-bus systems demonstrate the performance of the proposed approach.
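The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the mean-aggregation GCN step and the logarithmic barrier penalty below are common textbook forms, and all function names, the ±5% voltage band, and the shared weight matrix are illustrative assumptions.

```python
import numpy as np

def gcn_aggregate(features, adjacency, weight):
    """One simplified graph-convolution step: each agent (controllable
    device) averages its own and its neighbours' feature vectors over the
    grid topology, then applies a shared linear map and nonlinearity."""
    a_hat = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)           # node degrees
    h = (a_hat / deg) @ features                     # mean over neighbourhood
    return np.tanh(h @ weight)

def barrier_penalty(voltages, v_min=0.95, v_max=1.05, eps=1e-6):
    """Logarithmic barrier that grows rapidly as any bus voltage (p.u.)
    approaches the edge of the assumed safe range [v_min, v_max]; such a
    term can be subtracted from an agent's reward to discourage unsafe
    operating points."""
    margin_lo = np.clip(voltages - v_min, eps, None)
    margin_hi = np.clip(v_max - voltages, eps, None)
    return -np.sum(np.log(margin_lo) + np.log(margin_hi))

# Toy 3-bus line graph: bus 1 connects buses 0 and 2.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.ones((3, 2))                  # 2 features per agent
out = gcn_aggregate(feats, adj, np.eye(2))
```

The barrier term grows as voltages drift toward the range limits, so a policy trained against it is pushed back toward the interior of the safe band, which is the role the abstract assigns to the barrier function.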