Reinforcement learning
HVAC
Computer science
Energy consumption
Controller (irrigation)
Building automation
Control (management)
Energy (signal processing)
Real-time computing
Artificial intelligence
Air conditioning
Engineering
Statistics
Physics
Electrical engineering
Agronomy
Thermodynamics
Biology
Mechanical engineering
Mathematics
Authors
Amirhossein Azimi, Omid Akbari
Source
Journal: e-Prime
[Elsevier]
Date: 2024-07-27
Volume/Issue: 9, Article 100700
Cited by: 1
Identifier
DOI: 10.1016/j.prime.2024.100700
Abstract
Buildings are responsible for 30 % of the world's energy consumption, about half of which is consumed by Heating, Ventilation, and Air Conditioning (HVAC) systems. Intelligent control of these systems can significantly reduce global energy consumption. However, HVAC systems are also one of the most important means of providing comfort to occupants within buildings. These two control objectives may conflict with each other, meaning that reducing the energy consumption of the HVAC system may lead to dissatisfaction among the building's occupants. Thus, in most cases, the trade-off is handled by assigning weights to these objectives to indicate their relative importance. The priority of these goals may change during operation, which means the values of the assigned weights must change as well. Existing methods need to be retrained every time the weights change, which is time-consuming and costly, so an algorithm that can adapt to weight changes at operation time is needed. In this research, a dynamic multi-objective deep reinforcement learning algorithm is introduced to control the power of the HVAC system. Since the continuous power of the HVAC system must be controlled, a Deep Deterministic Policy Gradient (DDPG) algorithm empowered by Tunable Training is proposed, which can adapt to changes in the objective weights. The controller can adapt to changes in the importance of the objectives during operation and does not need to be retrained. The proposed method is compared with a dynamic multi-objective Deep Q-Network (DQN) algorithm, which is used for discrete control, in a smart home. The results show that the energy cost of the proposed method is up to 14 % lower than that of the DML-DQN algorithm.
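The abstract describes the weight-conditioned ("Tunable Training") idea only at a high level. The sketch below is a minimal, hypothetical Python/PyTorch illustration of the general technique it names, a DDPG-style deterministic actor that receives the objective-weight vector as an extra input alongside the state, together with a weighted-sum (scalarized) reward. It is not the authors' implementation; the state dimensions, network sizes, and the two-objective split into energy cost versus occupant discomfort are all assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of a weight-conditioned DDPG actor:
# because the objective weights are an input to the policy, changing the
# weights at operation time changes the behaviour without retraining.
import torch
import torch.nn as nn

STATE_DIM = 6        # assumed: indoor/outdoor temperature, price, time features
N_OBJECTIVES = 2     # assumed: energy cost vs. occupant discomfort
ACTION_LIMIT = 1.0   # normalized continuous HVAC power command

class WeightConditionedActor(nn.Module):
    """DDPG-style deterministic actor conditioned on the objective-weight vector."""
    def __init__(self, state_dim=STATE_DIM, n_obj=N_OBJECTIVES, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_obj, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),   # continuous action in [-1, 1]
        )

    def forward(self, state, weights):
        # Concatenate state and (normalized) weights so one trained network
        # can serve any weight setting chosen at run time.
        return ACTION_LIMIT * self.net(torch.cat([state, weights], dim=-1))

def scalarized_reward(energy_cost, discomfort, weights):
    """Weighted sum of the (negative) objectives; weights are assumed to sum to 1."""
    return -(weights[0] * energy_cost + weights[1] * discomfort)

if __name__ == "__main__":
    actor = WeightConditionedActor()
    state = torch.randn(1, STATE_DIM)
    # Same policy, two different priorities, no retraining:
    for w in ([0.8, 0.2], [0.2, 0.8]):
        power = actor(state, torch.tensor([w]))
        r = scalarized_reward(energy_cost=1.2, discomfort=0.3, weights=w)
        print(f"weights={w} -> power {power.item():+.3f}, toy reward {r:+.3f}")
```

During training, such a sketch would sample weight vectors alongside transitions so the critic and actor learn across the whole weight simplex; at deployment, the operator simply feeds the currently preferred weights to the actor.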