Reinforcement learning
Computer science
Energy (signal processing)
Distributed generation
Energy management
Multi-agent system
Distributed computing
Artificial intelligence
Engineering
Renewable energy
Electrical engineering
Statistics
Mathematics
Authors
Lifu Ding,Youkai Cui,Gangfeng Yan,Yaojia Huang,Zhen Fan
Identifiers
DOI: 10.1016/j.ijepes.2024.109867
Abstract
This paper addresses the problem of distributed energy management in multi-area integrated energy systems (MA-IES) using a multi-agent deep reinforcement learning approach. The MA-IES consists of interconnected electric and thermal networks, incorporating renewable energy sources and heat conversion systems. The objective is to optimize system operation while minimizing operational costs and maximizing renewable energy utilization. We propose a distributed energy management strategy that makes hierarchical decisions on intra-area heat energy and inter-area electric energy. The strategy is based on a multi-agent deep reinforcement learning framework in which each agent represents a component or unit of the MA-IES. We formulate the problem as a Markov Decision Process and employ Q-learning with experience replay and double networks to train the agents. The proposed strategy is evaluated in a simulation of a four-area MA-IES. The results demonstrate significant improvements in energy management over traditional methods, with higher renewable energy utilization and lower operational costs: the strategy achieves 100% utilization of wind power and reduces operational costs by 5.53%. Furthermore, it leverages the generalization capability of reinforcement learning to respond in real time to uncertainties in demand and wind power output. These results highlight the advantages of the proposed strategy, making it a promising solution for optimizing the operation of multi-area integrated energy systems.
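The training scheme named in the abstract (Q-learning with experience replay and double networks) can be illustrated with a minimal sketch. The version below is a simplified tabular double Q-learning agent with a replay buffer, not the paper's deep multi-agent implementation; the toy environment, state/action spaces, and all hyperparameters are illustrative assumptions.

```python
import random
from collections import deque

# Hedged sketch: tabular double Q-learning with experience replay, a
# simplified stand-in for the deep RL agents described in the abstract.
# The environment, dimensions, and hyperparameters are illustrative only.

random.seed(0)

N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

q_a = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # first Q-table
q_b = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # second ("double") Q-table
buffer = deque(maxlen=1000)  # experience replay buffer


def env_step(state, action):
    """Toy environment: action 0 yields reward 1; next state is random."""
    reward = 1.0 if action == 0 else 0.0
    return random.randrange(N_STATES), reward


def act(state):
    """Epsilon-greedy policy over the sum of both Q-tables."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    combined = [q_a[state][a] + q_b[state][a] for a in range(N_ACTIONS)]
    return combined.index(max(combined))


def replay_update(batch_size=32):
    """Sample past transitions and apply the double Q-learning update."""
    batch = random.sample(buffer, min(batch_size, len(buffer)))
    for s, a, r, s2 in batch:
        # Double Q-learning: one table selects the greedy next action,
        # the other evaluates it; the selecting table is updated.
        sel, evl = (q_a, q_b) if random.random() < 0.5 else (q_b, q_a)
        a_star = sel[s2].index(max(sel[s2]))
        target = r + GAMMA * evl[s2][a_star]
        sel[s][a] += ALPHA * (target - sel[s][a])


state = 0
for _ in range(2000):
    action = act(state)
    next_state, reward = env_step(state, action)
    buffer.append((state, action, reward, next_state))
    replay_update()
    state = next_state

# After training, the greedy action in state 0 should be the rewarded one.
best = max(range(N_ACTIONS), key=lambda a: q_a[0][a] + q_b[0][a])
print(best)
```

Using two tables decouples action selection from action evaluation, which reduces the maximization bias of standard Q-learning; the paper's deep variant replaces the tables with neural networks and distributes the agents across MA-IES components.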