Reinforcement learning
Strehl ratio
Integrator
Computer science
Python (programming language)
Adaptability
Adaptive optics
Adaptive control
Control theory
Artificial intelligence
Control (management)
Physics
Computer networks
Ecology
Bandwidth (computing)
Astronomy
Biology
Operating systems
Authors
Raissa Camelo,Jalo Nousiainen,Cédric Taïssir Heritier,Morgan Gray,Benoît Neichel
Abstract
Predictive control laws for Adaptive Optics (AO) based on Artificial Intelligence have recently been explored as an alternative to classic methods such as the integrator law. Reinforcement Learning excels at predictive control tasks by enabling systems to learn optimal control strategies through continuous interaction with their environment, adapting to dynamic conditions and making effective decisions in real time. In our previous work, a Model-Based Reinforcement Learning (MBRL) method called Policy Optimization for Adaptive Optics (PO4AO) was used in conjunction with the Object-Oriented Python Adaptive Optics (OOPAO) package to simulate the Provence Adaptive Optics Pyramid Run System (PAPYRUS) optical bench. PO4AO demonstrated high adaptability to turbulence and rapid convergence, achieving optimal corrections after just 500 frames of interaction and outperforming a simulated integrator under different atmospheric conditions. Building upon this, our current study explores PO4AO's capability to adapt to sudden atmospheric changes by worsening the turbulence conditions, notably the wind speed and the seeing, during evaluation. In the results section, we compare PO4AO's performance to the integrator in terms of Strehl Ratio (SR). A further description of the experiments is given in the paper.
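For context, the classic integrator law that serves as the baseline here accumulates reconstructed wavefront residuals into the deformable-mirror command at each frame, c_{t+1} = c_t + g · M · s_t. The sketch below is a minimal, illustrative toy in Python: the dimensions, the random reconstructor matrix `M`, and the random slope measurements are all assumptions for demonstration, not taken from the paper or from OOPAO.

```python
import numpy as np

# Hypothetical toy dimensions: 10 wavefront-sensor slopes, 5 actuator modes.
rng = np.random.default_rng(0)
n_slopes, n_act = 10, 5

# Stand-in reconstructor matrix mapping slopes to actuator commands.
# In a real AO system this comes from an interaction-matrix calibration.
M = 0.1 * rng.standard_normal((n_act, n_slopes))

def integrator_step(command, slopes, gain=0.5):
    """One step of the integrator law: c_{t+1} = c_t + gain * M @ s_t."""
    return command + gain * M @ slopes

# Closed-loop iteration over synthetic residual slope measurements.
command = np.zeros(n_act)
for _ in range(100):
    slopes = rng.standard_normal(n_slopes)  # stand-in for measured residuals
    command = integrator_step(command, slopes)
print(command.shape)
```

The fixed gain is the key limitation motivating learned predictive control: it cannot anticipate turbulence evolution between measurement and correction, whereas an RL policy such as PO4AO adapts its commands from interaction with the (simulated) bench.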