Keywords
Reinforcement learning
Microgrid
Demand response
Computer science
Smart grid
Reliability (semiconductor)
Controllability
Distributed computing
Reliability engineering
Electricity
Control (management)
Artificial intelligence
Power (physics)
Engineering
Physics
Mathematics
Quantum mechanics
Applied mathematics
Electrical engineering
Authors
Muhammad Ikram,Salman Ahmed,Safdar Nawaz Khan Marwat,Muhammad Nasir
Identifier
DOI:10.1109/icece58062.2023.10092514
Abstract
Demand response (DR) is an approach that encourages consumers to reshape their consumption patterns during peak demand, improving power-system reliability and minimizing cost. An optimal DR scheme benefits not only the distribution system operators (DSOs) but also the consumers in the energy network. This paper introduces a multi-agent coordination control and reinforcement learning approach for optimal DR management. Each microgrid is treated as an agent that estimates states and actions in the smart grid under programmed reward and incentive plans. A multi-agent Markov game (MAMG) formulates the states and actions, while the reward is learned through deep Q-network (DQN) and deep deterministic policy gradient (DDPG) reinforcement learning schemes. The proposed DR model also encourages consumer participation for long-term incentivized benefits by integrating battery energy storage systems (BESS) into the smart grid network. The reliability of the DQN and DDPG schemes is demonstrated, and the dynamically changing electricity cost is reduced by 19.86%. Moreover, controllability of complex microgrids is achieved with limited control information, ensuring the integrity and reliability of the network. The proposed schemes were simulated and evaluated in MATLAB and Python (PyCharm IDE) environments.
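To give a concrete flavor of reward-driven DR with battery storage, the sketch below is a deliberately simplified stand-in for the paper's method: it replaces the multi-agent DQN/DDPG setup with single-agent tabular Q-learning for one microgrid dispatching a battery against a time-varying tariff. The price profile, battery size, and hyperparameters are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch only: the paper uses DQN/DDPG agents in a multi-agent
# Markov game; this toy uses single-agent tabular Q-learning for a
# battery-dispatch DR problem. Tariff, battery size, and hyperparameters
# below are made-up assumptions.

rng = np.random.default_rng(0)
PRICES = np.array([0.10, 0.12, 0.30, 0.35, 0.15, 0.11])  # hypothetical hourly tariff
N_SOC, N_ACT = 5, 3            # battery state-of-charge levels; actions: charge/idle/discharge
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((len(PRICES), N_SOC, N_ACT))

def step(t, soc, action):
    """Return (next_soc, reward): pay the tariff to charge, earn it to discharge."""
    if action == 0 and soc < N_SOC - 1:   # charge one unit
        return soc + 1, -PRICES[t]
    if action == 2 and soc > 0:           # discharge one unit
        return soc - 1, PRICES[t]
    return soc, 0.0                       # idle, or infeasible charge/discharge

for _ in range(2000):                     # epsilon-greedy Q-learning episodes
    soc = 0
    for t in range(len(PRICES)):
        a = int(rng.integers(N_ACT)) if rng.random() < EPS else int(np.argmax(Q[t, soc]))
        nxt, r = step(t, soc, a)
        future = Q[t + 1, nxt].max() if t + 1 < len(PRICES) else 0.0
        Q[t, soc, a] += ALPHA * (r + GAMMA * future - Q[t, soc, a])
        soc = nxt

# Greedy rollout: the learned policy buys energy in cheap hours and sells in peaks.
soc, profit = 0, 0.0
for t in range(len(PRICES)):
    soc, r = step(t, soc, int(np.argmax(Q[t, soc])))
    profit += r
print(f"greedy-policy profit: {profit:.2f}")
```

The peak/off-peak arbitrage captured here is the same incentive mechanism the abstract describes for BESS-equipped consumers; DQN and DDPG replace the Q-table with neural-network function approximators so the approach scales to continuous states and actions.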