Keywords: Markov chain, Mathematics, Divergence, Mathematical optimization, Stability theory, Constant (computer programming), Convergence, Applied mathematics, Computer science, Statistics, Machine learning, Programming languages
Authors
Alexandre Chotard,Anne Auger,Nikolaus Hansen
Abstract
This paper analyzes a (1, $\lambda$)-Evolution Strategy, a randomized comparison-based adaptive search algorithm, optimizing a linear function with a linear constraint. The algorithm uses resampling to handle the constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using cumulative step-size adaptation. We exhibit for each case a Markov chain describing the behaviour of the algorithm. Stability of the chain implies, by applying a law of large numbers, either convergence or divergence of the algorithm. Divergence is the desired behaviour. In the constant step-size case, we show stability of the Markov chain and prove the divergence of the algorithm. In the cumulative step-size adaptation case, we prove stability of the Markov chain in the simplified case where the cumulation parameter equals 1, and discuss steps to obtain similar results for the full (default) algorithm where the cumulation parameter is smaller than 1. The stability of the Markov chain allows us to deduce geometric divergence or convergence, depending on the dimension, constraint angle, population size and damping parameter, at a rate that we estimate. Our results complement previous studies where stability was assumed.
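To make the constant step-size setting concrete, the following is a minimal sketch (not the authors' implementation) of a (1, $\lambda$)-ES with resampling on a linear objective under a linear constraint. The objective `f(x) = x[0]`, the constraint normal `n` built from an angle `theta`, and all parameter defaults are illustrative assumptions; the paper's result is that, with constant step-size, the fitness diverges (here, toward minus infinity) at a rate one can estimate.

```python
import numpy as np

def one_lambda_es_resampling(dim=10, lam=5, sigma=1.0,
                             theta=np.pi / 4, iters=200, seed=1):
    """Sketch: (1,lambda)-ES, constant step-size, resampling for feasibility.

    Illustrative assumptions: objective f(x) = x[0] (to be minimized, so
    divergence means f -> -inf); feasible region {x : x @ n <= 0}, where the
    constraint normal n lies at angle theta to the gradient direction.
    """
    rng = np.random.default_rng(seed)
    f = lambda x: x[0]                      # linear objective
    n = np.zeros(dim)                       # constraint normal (assumed form)
    n[0], n[1] = np.cos(theta), np.sin(theta)

    x = np.zeros(dim)                       # feasible starting point
    history = []
    for _ in range(iters):
        offspring = []
        for _ in range(lam):
            # resample until the candidate satisfies the linear constraint
            while True:
                y = x + sigma * rng.standard_normal(dim)
                if y @ n <= 0:
                    break
            offspring.append(y)
        # comparison-based, non-elitist selection: best of the lambda offspring
        x = min(offspring, key=f)
        history.append(f(x))
    return np.array(history)
```

Running the sketch, `history` drifts roughly linearly toward minus infinity, which is the (desired) divergence the paper proves for the constant step-size case; the cumulative step-size adaptation case would additionally update `sigma` from a smoothed path of selected steps.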