Organic Rankine cycle
Reinforcement learning
Control engineering
Context (archaeology)
Generalization
Controller (irrigation)
Engineering
Control (management)
Computer science
Process (computing)
Control system
Artificial intelligence
Waste heat
Mechanical engineering
Heat exchanger
Electrical engineering
Operating system
Mathematical analysis
Paleontology
Biology
Mathematics
Agronomy
Authors
Runze Lin, Yangyang Luo, Xialai Wu, Junghui Chen, Biao Huang, Hongye Su, Lihua Xie
Identifier
DOI: 10.1016/j.apenergy.2023.122310
Abstract
The Organic Rankine Cycle (ORC) is widely used in industrial waste heat recovery due to its simple structure and easy maintenance. However, in the context of smart manufacturing in the process industry, traditional model-based optimal control methods cannot adapt to the ORC system's varying operating conditions or sudden changes in operating mode. Deep reinforcement learning (DRL) offers significant advantages under uncertainty because it achieves control objectives directly by interacting with the environment, without requiring an explicit model of the controlled plant. Nevertheless, applying DRL directly to physical ORC systems poses unacceptable safety risks, and its generalization performance under model-plant mismatch is insufficient for ORC control requirements. This paper therefore proposes a Sim2Real transfer learning-based DRL control method for ORC superheat control, aiming to provide a simple, feasible, and user-friendly solution for the optimal control of energy systems. Experimental results show that the proposed method greatly accelerates DRL training on the ORC control problem and, through Sim2Real transfer, resolves the agent's generalization problem across multiple operating conditions.
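The abstract does not specify the paper's plant model or DRL algorithm, but the Sim2Real idea it describes — pretrain an agent on a simulated plant whose parameters are randomized across operating conditions, then transfer the learned policy to the physical system — can be illustrated with a deliberately minimal sketch. Everything below (the OrcSuperheatSim toy plant, its parameter ranges, the proportional policy, and the random-search update standing in for a deep RL algorithm) is a hypothetical illustration, not the authors' method:

```python
import numpy as np

class OrcSuperheatSim:
    """Toy first-order superheat plant. Randomized gain and time constant
    emulate the varying operating conditions a Sim2Real agent must cover.
    All parameter ranges here are hypothetical."""
    def __init__(self, rng):
        self.gain = rng.uniform(0.5, 1.5)        # hypothetical plant gain
        self.tau = rng.uniform(5.0, 20.0)        # hypothetical time constant [s]
        self.superheat = rng.uniform(5.0, 15.0)  # initial superheat [K]
        self.setpoint = 10.0                     # superheat target [K]

    def step(self, u):
        """Apply control input u (e.g., a pump-speed change) for one step."""
        self.superheat += self.gain * u - (self.superheat - self.setpoint) / self.tau
        error = self.superheat - self.setpoint
        return error, -error ** 2                # (observation, reward)

def rollout(k, rng, horizon=100):
    """Total reward of the proportional policy u = -k * error on a freshly
    randomized simulated plant (one training episode)."""
    env, total = OrcSuperheatSim(rng), 0.0
    error = env.superheat - env.setpoint
    for _ in range(horizon):
        error, reward = env.step(-k * error)
        total += reward
    return total

def pretrain_in_sim(iters=200, seed=0):
    """Random-search policy improvement over many randomized simulations:
    a crude stand-in for the DRL pretraining stage of a Sim2Real pipeline."""
    rng = np.random.default_rng(seed)
    k, best = 0.0, -np.inf
    for _ in range(iters):
        candidate = k + rng.normal(scale=0.1)
        # Score each candidate across several randomized plants so the
        # learned policy generalizes rather than overfitting one condition.
        score = np.mean([rollout(candidate, rng) for _ in range(8)])
        if score > best:
            k, best = candidate, score
    return k  # this policy would then be transferred to the physical plant

if __name__ == "__main__":
    print("learned proportional gain:", pretrain_in_sim())
```

In a realistic pipeline, the random-search update would be replaced by a deep RL algorithm (e.g., DDPG or PPO), and the policy returned by pretrain_in_sim() would serve only as an initialization to be fine-tuned, under safety constraints, on the physical ORC rig.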