Air purifier
Indoor air quality
Reinforcement learning
Energy consumption
Air quality index
Energy efficiency
Control (management)
Computer science
Automotive engineering
Environmental science
Engineering
Environmental engineering
Artificial intelligence
Meteorology
Mechanical engineering
Electrical engineering
Physics
Authors
Wenzhe Shang, Junjie Liu, Congcong Wang, Jiayu Li, Xilei Dai
Identifier
DOI:10.1016/j.buildenv.2023.110556
Abstract
PM2.5 has a negative impact on human health. Although air purifiers can remove indoor PM2.5 effectively, occupants rarely operate them in a way that achieves their best performance. It is therefore important to develop an automatic control strategy for air purifiers that delivers both good indoor air quality and energy efficiency. Because traditional air purifier control strategies cannot adapt to stochastic resident behavior, such as indoor PM2.5 emissions and window opening, they result in superfluous energy consumption. This study uses a deep reinforcement learning (DeepRL) approach to control the air purifier automatically, providing better indoor air quality with lower energy consumption. To make DeepRL applicable to real daily life, we first developed a stochastic model based on measured indoor air quality data that simulates the indoor PM2.5 process in real time. To improve the energy efficiency of the air purifier under this condition, we then trained the DeepRL agent to control the air purifier in the simulated PM2.5 process. By adapting to the stochastic environmental parameters, the RL strategy can make the best-fitting decision in advance and achieve a more stable control effect. Compared with the baseline strategy, both RL-1 and RL-2 show significant improvements in energy efficiency. Specifically, the RL-1 strategy reduces energy consumption by 43.7% while maintaining essentially the same indoor PM2.5 concentration level in the best-IAQ scenario, and the RL-2 strategy reduces energy consumption by 40.6% and the frequency with which the indoor PM2.5 concentration exceeds the WHO air quality guideline by 25.6%, giving it better overall control performance in the general-IAQ scenario.
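The abstract does not give the exact state, action, or reward formulation used in the paper, so the sketch below is only an illustration of the general idea of RL-based purifier control. It assumes a hypothetical single-zone mass-balance PM2.5 model with stochastic window and indoor-emission events, four fan speeds, and a reward that penalizes both exceedance of the WHO PM2.5 guideline and fan energy; a simple tabular Q-learning loop stands in for the authors' deep RL agent, and every parameter value is invented for illustration.

```python
import random

# Illustrative single-zone mass-balance PM2.5 environment. All parameter values
# (room volume, CADR, power, emission rates, air-change rates) are assumptions
# for this sketch, not the calibrated stochastic model described in the paper.
class PurifierEnv:
    WHO_GUIDELINE = 15.0                      # ug/m3, WHO 24-h PM2.5 guideline
    FAN_CADR = [0.0, 100.0, 200.0, 400.0]     # clean-air delivery rate per fan speed, m3/h
    FAN_POWER = [0.0, 10.0, 25.0, 60.0]       # fan power per speed, W (assumed)

    def __init__(self, volume_m3=50.0, dt_h=1.0 / 12.0):   # 5-minute time step
        self.volume = volume_m3
        self.dt = dt_h
        self.reset()

    def reset(self):
        self.c_in = 20.0           # indoor PM2.5, ug/m3
        self.c_out = 60.0          # outdoor PM2.5, ug/m3 (held constant for simplicity)
        self.window_open = False
        return self._state()

    def _state(self):
        # Coarse discretization so the toy tabular agent below can index it.
        return (min(int(self.c_in // 10), 9), int(self.window_open))

    def step(self, action):
        # Stochastic occupant behavior: occasional window toggling and emission bursts.
        if random.random() < 0.05:
            self.window_open = not self.window_open
        emission = 200.0 if random.random() < 0.02 else 0.0   # ug/h (e.g. cooking)

        ach = 1.5 if self.window_open else 0.3                # air changes per hour
        cadr = self.FAN_CADR[action]
        # Mass balance: infiltration + indoor emission - removal by the purifier.
        dc_dt = (ach * (self.c_out - self.c_in)
                 + emission / self.volume
                 - cadr / self.volume * self.c_in)
        self.c_in = max(self.c_in + dc_dt * self.dt, 0.0)

        energy_wh = self.FAN_POWER[action] * self.dt
        exceedance = max(self.c_in - self.WHO_GUIDELINE, 0.0)
        reward = -exceedance - 0.5 * energy_wh                # IAQ vs. energy trade-off
        return self._state(), reward, energy_wh


# Toy tabular Q-learning loop standing in for the paper's deep RL agent.
def train(episodes=200, steps_per_day=288, alpha=0.1, gamma=0.95, eps=0.1):
    env, q = PurifierEnv(), {}
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps_per_day):
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda a_: q.get((s, a_), 0.0)))
            s2, r, _ = env.step(a)
            best_next = max(q.get((s2, a_), 0.0) for a_ in range(4))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))
            s = s2
    return q

q_table = train()
```

In this sketch, the weight on the energy term in the reward is the knob that would shift the controller between prioritizing indoor air quality and prioritizing energy savings; the paper's RL-1 and RL-2 strategies presumably reflect different trade-offs of this kind, though the abstract does not state their exact reward designs.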