Keywords
Local optimum, Adaptive optics, Gradient descent, Wavefront, Convergence (economics), Control theory (sociology), Algorithm, Stochastic gradient descent algorithm, Computer science, Actuator, Mathematics, Mathematical optimization, Artificial intelligence, Artificial neural network, Physics, Optics, Economic growth, Economics, Control (management)
Authors
Fang Zhou,Xiangxiang Xu,Xin Li,Huizhen Yang,Chenglong Gong
Source
Journal: AOPC 2020: Optical Sensing and Imaging Technology
Date: 2020-11-05
Volume/issue: 163650: 174-174
Cited by: 3
Abstract
SPGD (stochastic parallel gradient descent) is a control algorithm widely used in wavefront-sensorless (WFSless) adaptive optics (AO) systems. In the traditional SPGD algorithm, the gain is commonly set to a fixed value. As the number of deformable mirror (DM) actuators increases, the optimization space of the algorithm grows larger, which easily leads to slow convergence and a higher probability of falling into local optima. The Adam (Adaptive Moment Estimation) optimizer is an improved stochastic gradient descent algorithm commonly used in deep learning, whose advantage is that it provides an adaptive gain. Taking wavefront aberrations under different turbulence strengths as the correction objects, WFSless AO systems are built with 32-, 61-, 97- and 127-element DMs as wavefront correctors. Results show that the optimized algorithm converges faster than the basic SPGD algorithm and that the probability of falling into local optima decreases. The system's convergence speed is increased by about 30%. The probability of falling into local optima is decreased by 29.8%, 30.3%, 32.6% and 35.9% respectively under D/r0 = 5, and by 28.8%, 30.5%, 33.3% and 34.5% respectively under D/r0 = 15. The advantages of the optimized algorithm become more obvious as the number of DM actuators increases. These results provide a theoretical basis for the practical application of the SPGD algorithm based on Adam optimization.
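To illustrate the idea of replacing SPGD's fixed gain with the Adam update, the following is a minimal Python/NumPy sketch. It is not the authors' implementation: the toy metric `image_sharpness_metric`, the actuator count, and all hyperparameters (perturbation amplitude, learning rate, beta values) are illustrative assumptions standing in for the real WFSless AO performance metric and DM hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the WFSless AO performance metric (e.g. far-field
# image sharpness): a smooth function of the DM actuator voltages whose maximum
# corresponds to the best correction. Not the metric used in the paper.
N_ACT = 32                       # number of DM actuators (32/61/97/127 in the paper)
u_ideal = rng.normal(size=N_ACT) # unknown "ideal" voltages for this toy problem

def image_sharpness_metric(u):
    return np.exp(-np.sum((u - u_ideal) ** 2) / N_ACT)

def spgd_adam(metric, n_act, iters=500, delta=0.05,
              lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """SPGD iteration with the fixed gain replaced by an Adam step (sketch)."""
    u = np.zeros(n_act)          # DM control voltages
    m = np.zeros(n_act)          # first-moment (mean) estimate
    v = np.zeros(n_act)          # second-moment (uncentered variance) estimate
    for k in range(1, iters + 1):
        # Random bipolar perturbation applied to all actuators in parallel.
        dv = delta * rng.choice([-1.0, 1.0], size=n_act)
        dJ = metric(u + dv) - metric(u - dv)
        g = dJ * dv              # stochastic parallel gradient estimate
        # Adam moment updates with bias correction.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** k)
        v_hat = v / (1 - beta2 ** k)
        # Ascent step: the metric is being maximized, so the Adam step is added.
        u = u + lr * m_hat / (np.sqrt(v_hat) + eps)
    return u

u_final = spgd_adam(image_sharpness_metric, N_ACT)
print("final metric:", image_sharpness_metric(u_final))
```

The per-actuator scaling by the square root of the second-moment estimate is what gives the adaptive gain: actuators with consistently large gradient estimates receive smaller effective steps, while weakly driven actuators are not starved, which is the behavior a fixed-gain SPGD cannot provide as the actuator count grows.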