Topics: Model predictive control, Reinforcement learning, Constraint satisfaction, Computer science, Controller (irrigation), Adaptability, Control theory (sociology), Mathematical optimization, Constraint (computer-aided design), Control (management), Control engineering, Artificial intelligence, Engineering, Mathematics, Biology, Mechanical engineering, Probabilistic logic, Ecology, Agronomy
Authors
Javier Arroyo, Carlo Manna, Fred Spiessens, Lieve Helsen
Source
Journal: Applied Energy [Elsevier]
Date: 2022-03-01
Volume/Issue: 309, Article 118346
Citations: 44
Identifier
DOI: 10.1016/j.apenergy.2021.118346
Abstract
Buildings need advanced control for the efficient and climate-neutral use of their energy systems. Model predictive control (MPC) and reinforcement learning (RL) have emerged as two powerful control techniques that have been extensively investigated in the literature for their application to building energy management. These methods show complementary qualities in terms of constraint satisfaction, computational demand, adaptability, and intelligibility, yet in practice a choice is usually made between the two. This paper compares both control approaches and proposes a novel algorithm, called reinforced predictive control (RL-MPC), that merges their relative merits. First, the complementarity between RL and MPC is highlighted at a conceptual level by discussing the main aspects of each method. Second, the RL-MPC algorithm is described, which effectively combines features from each approach, namely state estimation, dynamic optimization, and learning. Finally, MPC, RL, and RL-MPC are implemented and evaluated in BOPTEST, a standardized simulation framework for the assessment of advanced control algorithms in buildings. The results indicate that pure RL cannot provide constraint satisfaction when using a control formulation equivalent to that of MPC and the same controller model for learning. The new RL-MPC algorithm can meet constraints and provide performance similar to that of MPC while enabling continuous learning and the ability to deal with uncertain environments.
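The abstract names three ingredients that RL-MPC combines: state estimation, dynamic optimization, and learning. The sketch below is a minimal, hypothetical illustration of such a loop, not the authors' implementation: it assumes a toy one-state thermal model, replaces a full dynamic optimization with a crude grid search over the first heating input, substitutes a noisy measurement for a real state estimator, and learns a terminal value with linear TD(0). All model coefficients, bounds, and weights are invented for the example; the paper itself evaluates the controllers in BOPTEST.

```python
# Hypothetical RL-MPC-style loop (illustration only, not the paper's code):
# a toy one-state building model, a short-horizon grid-search "MPC" with a
# learned terminal value, and a TD(0) update of that value after each step.
import numpy as np

rng = np.random.default_rng(0)

# --- assumed toy building dynamics ----------------------------------------
a, b, c = 0.9, 0.05, 0.1          # state, input, and ambient coefficients
T_out = 5.0                        # constant outdoor temperature [C]
U = np.linspace(0.0, 10.0, 11)     # discretized heating power [kW]
T_MIN, T_MAX = 20.0, 24.0          # comfort band [C]
PRICE, PENALTY = 0.1, 100.0        # energy price and comfort penalty weights
H, GAMMA, ALPHA = 6, 0.95, 0.05    # horizon, discount, learning rate

def step(T, u):
    """One step of the toy thermal model."""
    return a * T + b * u + c * T_out

def stage_cost(T, u):
    """Energy cost plus soft comfort-constraint penalty."""
    violation = max(T_MIN - T, 0.0) + max(T - T_MAX, 0.0)
    return PRICE * u + PENALTY * violation

# Learned terminal value, linear in simple temperature features.
w = np.zeros(3)
def features(T):
    return np.array([1.0, T - T_MIN, (T - T_MIN) ** 2])
def value(T):
    return float(w @ features(T))

def mpc_action(T0):
    """Grid search over the first input with a fixed tail policy and the
    learned value as terminal cost (a crude stand-in for a real optimizer)."""
    best_u, best_cost = U[0], np.inf
    for u0 in U:
        T, cost = T0, 0.0
        for k in range(H):
            u = u0 if k == 0 else 5.0       # simple tail policy for the rollout
            cost += (GAMMA ** k) * stage_cost(T, u)
            T = step(T, u)
        cost += (GAMMA ** H) * value(T)     # learned terminal value
        if cost < best_cost:
            best_u, best_cost = u0, cost
    return best_u

# --- closed loop: estimate state, optimize, apply, learn -------------------
T = 18.0
for t in range(200):
    T_meas = T + rng.normal(0.0, 0.1)       # trivial stand-in for estimation
    u = mpc_action(T_meas)
    T_next = step(T, u)
    # TD(0) update of the terminal value from the observed transition.
    td_target = stage_cost(T_meas, u) + GAMMA * value(T_next)
    w += ALPHA * (td_target - value(T_meas)) * features(T_meas)
    T = T_next

print(f"final temperature: {T:.2f} C, value weights: {np.round(w, 3)}")
```

The point of the sketch is the structure of the loop (estimate the state, optimize over a finite horizon with a learned terminal cost, apply the first action, update the value function), not the particular model or optimizer used here.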