Grid
Turbulence
Momentum (technical analysis)
Forcing (mathematics)
Reinforcement learning
Vortex
Bubble
Computer science
Flow (mathematics)
Flow control (data)
Reduction (mathematics)
Control (management)
Mechanics
Control theory (sociology)
Artificial intelligence
Physics
Mathematics
Parallel computing
Geometry
Computer network
Finance
Atmospheric sciences
Economics
Authors
Bernat Font,Francisco Alcántara-Ávila,Jean Rabault,Ricardo Vinuesa,O. Lehmkuhl
Identifiers
DOI:10.1038/s41467-025-56408-6
Abstract
Abstract The control efficacy of deep reinforcement learning (DRL) compared with classical periodic forcing is numerically assessed for a turbulent separation bubble (TSB). We show that a control strategy learned on a coarse grid works on a fine grid as long as the coarse grid captures main flow features. This allows to significantly reduce the computational cost of DRL training in a turbulent-flow environment. On the fine grid, the periodic control is able to reduce the TSB area by 6.8%, while the DRL-based control achieves 9.0% reduction. Furthermore, the DRL agent provides a smoother control strategy while conserving momentum instantaneously. The physical analysis of the DRL control strategy reveals the production of large-scale counter-rotating vortices by adjacent actuator pairs. It is shown that the DRL agent acts on a wide range of frequencies to sustain these vortices in time. Last, we also introduce our computational fluid dynamics and DRL open-source framework suited for the next generation of exascale computing machines.