Ravine
Mathematics
Convergence (economics)
Applied mathematics
Mathematical optimization
Rate of convergence
Extrapolation
Vice versa
Convex function
Discretization
Regular polygon
Gradient method
Boundary (topology)
Mathematical analysis
Geometry
Key (lock)
Computer science
Computer security
Archaeology
Database
Economics
History
Economic growth
Authors
Hédy Attouch, Jalal Fadili
Source
Journal: SIAM Journal on Optimization
[Society for Industrial and Applied Mathematics]
Date: 2022-08-16
Volume/Issue: 32 (3): 2074-2101
Cited by: 13
Abstract
We revisit the Ravine method of Gelfand and Tsetlin from a dynamical systems perspective, study its convergence properties, and highlight its similarities to and differences from the Nesterov accelerated gradient method. The two methods are closely related: they can be deduced from each other by reversing the order of the extrapolation and gradient operations in their definitions, and they enjoy similar fast convergence of values and convergence of iterates for general convex objective functions. We also establish the high-resolution ODEs of the Ravine and Nesterov methods and reveal, for both, an additional geometric damping term driven by the Hessian. This allows us to prove, for the first time, fast convergence of the gradients toward zero not only for the Ravine method but also for the Nesterov method. In the strongly convex case, we show linear convergence of the Ravine method at an optimal rate. We also highlight connections to other algorithms resulting from more subtle discretization schemes, and finally describe a Ravine version of the proximal-gradient algorithms for general structured smooth + nonsmooth convex optimization problems.
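The abstract's central observation, that the Ravine and Nesterov methods are obtained from each other by swapping the order of the extrapolation and gradient steps, can be made concrete in a few lines. The sketch below is not the authors' code: the step size s = 1/L, the momentum schedule alpha_k = k/(k+3), and the quadratic test objective are standard illustrative assumptions; only the order of the two operations in each loop reflects the abstract.

```python
import numpy as np

def nesterov(grad, x0, s, n_iter):
    """Nesterov accelerated gradient: extrapolate first, then take a gradient step."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, n_iter + 1):
        alpha = k / (k + 3)              # illustrative momentum schedule (assumption)
        y = x + alpha * (x - x_prev)     # extrapolation step
        x_prev, x = x, y - s * grad(y)   # gradient step at the extrapolated point
    return x

def ravine(grad, x0, s, n_iter):
    """Ravine method (Gelfand-Tsetlin): gradient step first, then extrapolate."""
    y_prev = x0.copy()
    z = x0.copy()
    for k in range(1, n_iter + 1):
        alpha = k / (k + 3)
        y = z - s * grad(z)              # gradient step at the current point
        z = y + alpha * (y - y_prev)     # extrapolation on the gradient iterates
        y_prev = y
    return z

# Demo on a simple convex quadratic f(x) = 0.5 * x'Ax (assumption, for illustration)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
A = A.T @ A / 20                         # positive semidefinite Hessian
grad = lambda x: A @ x
L = np.linalg.eigvalsh(A).max()          # Lipschitz constant of the gradient
x0 = rng.standard_normal(20)

for method in (nesterov, ravine):
    x = method(grad, x0, s=1.0 / L, n_iter=200)
    print(method.__name__, np.linalg.norm(grad(x)))
```

On this quadratic both loops drive the gradient norm toward zero at a comparable rate; moreover, the extrapolated points generated by the Nesterov loop themselves follow a gradient-then-extrapolate recursion of Ravine type, which is one way to read the abstract's claim that each method can be deduced from the other.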