Keywords
Reinforcement learning
Markov decision process
Computer science
Energy consumption
Energy management
Electricity
Electricity price
Markov process
Demand response
Artificial intelligence
Operations research
Mathematical optimization
Energy (signal processing)
Engineering
Electricity market
Statistics
Mathematics
Electrical engineering
Authors
Renzhi Lu, Zhenyu Jiang, Huaming Wu, Yuemin Ding, Dong Wang, Hai-Tao Zhang
Source
Journal: IEEE Transactions on Industrial Informatics
[Institute of Electrical and Electronics Engineers]
Date: 2022-06-16
Volume/Issue: 19 (3): 2662-2673
Citations: 37
Identifier
DOI: 10.1109/TII.2022.3183802
Abstract
Residential energy consumption continues to climb steadily, requiring intelligent energy management strategies to reduce power system pressures and residential electricity bills. However, designing such strategies is challenging due to the random nature of electricity pricing, appliance demand, and user behavior. This article presents a novel reward shaping (RS)-based actor–critic deep reinforcement learning (ACDRL) algorithm to manage the residential energy consumption profile with limited information about the uncertain factors. Specifically, the interaction between the energy management center and various residential loads is modeled as a Markov decision process, a mathematical framework for decision-making in situations where outcomes are partly random and partly controlled by the decision-maker's signals. The key elements of this model (agent, environment, state, action, and reward) are carefully designed, and the electricity price is treated as a stochastic variable. An RS-ACDRL algorithm incorporating actor and critic networks together with an RS mechanism is then developed to learn the optimal energy consumption schedules. Several case studies involving real-world data are conducted to evaluate the performance of the proposed algorithm. Numerical results demonstrate that it outperforms state-of-the-art RL methods in terms of learning speed, solution optimality, and cost reduction.
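To illustrate the general idea described in the abstract, below is a minimal sketch (not the authors' code) of reward shaping inside a tabular actor–critic loop for a toy appliance-scheduling MDP. The four-state "hour bucket" environment, the prices, the potential function, and all hyperparameters are invented for illustration; the paper uses deep actor and critic networks and real-world data.

```python
import numpy as np

# Toy MDP: state = hour-of-day bucket (0..3), action = run appliance (1) or defer (0).
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, GAMMA = 4, 2, 0.95
PRICE = np.array([0.30, 0.25, 0.02, 0.12])  # hypothetical $/kWh per bucket

def potential(s):
    # Shaping potential: cheaper-price states are "closer" to a good schedule.
    return -PRICE[s]

def step(s, a):
    # Reward: negative energy cost if the appliance runs, small discomfort if deferred.
    r = -PRICE[s] if a == 1 else -0.05
    s_next = (s + 1) % N_STATES
    done = (s_next == 0)  # one pass over the four buckets per episode
    return s_next, r, done

theta = np.zeros((N_STATES, N_ACTIONS))  # actor: policy logits
V = np.zeros(N_STATES)                   # critic: state values
alpha_a, alpha_c = 0.2, 0.2

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

for episode in range(2000):
    s, done = 0, False
    while not done:
        p = policy(s)
        a = rng.choice(N_ACTIONS, p=p)
        s_next, r, done = step(s, a)
        # Potential-based reward shaping: r' = r + gamma*Phi(s') - Phi(s);
        # adding the same term to every action preserves the optimal policy.
        r_shaped = r + GAMMA * potential(s_next) - potential(s)
        target = r_shaped + (0.0 if done else GAMMA * V[s_next])
        td_error = target - V[s]
        V[s] += alpha_c * td_error            # critic update (TD(0))
        grad = -p
        grad[a] += 1.0                        # d log pi(a|s) / d theta[s]
        theta[s] += alpha_a * td_error * grad # actor update (policy gradient)
        s = s_next
```

After training, the softmax policy should prefer running the appliance in the cheap-price bucket (state 2) and deferring in the expensive ones, which is the qualitative behavior the RS-ACDRL scheduler targets at much larger scale.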