Keywords
Conjugate gradient method
Stochastic gradient descent
Convergence (economics)
Line search
Algorithm
Computer science
Divergence (linguistics)
Variance reduction
Rate of convergence
Stochastic optimization
Stability (learning theory)
Scale (ratio)
Mathematical optimization
Mathematics
Artificial intelligence
Machine learning
Key (lock)
Artificial neural network
Statistics
Radius
Economics
Philosophy
Physics
Quantum mechanics
Economic growth
Linguistics
Computer security
Monte Carlo method
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
Publisher: Institute of Electrical and Electronics Engineers
Date: 2023-06-07
Volume/Issue/Pages: 35 (10): 14645-14658
Identifiers
DOI: 10.1109/TNNLS.2023.3280826
Abstract
Conjugate gradient (CG), as an effective technique to speed up gradient descent algorithms, has shown great potential and has been widely used for large-scale machine-learning problems. However, CG and its variants were not devised for the stochastic setting, which makes them highly unstable and can even lead to divergence when noisy gradients are used. This article develops a novel class of stable stochastic CG (SCG) algorithms with a faster convergence rate via a variance-reduction technique and an adaptive step-size rule in the mini-batch setting. In place of the line search used in CG-type approaches, which is time-consuming and can even fail for SCG, this article uses the random stabilized Barzilai–Borwein (RSBB) method to obtain an online step size. We rigorously analyze the convergence properties of the proposed algorithms and show that they attain a linear convergence rate in both the strongly convex and nonconvex settings. We also show that the total complexity of the proposed algorithms matches that of modern stochastic optimization algorithms across these settings. Extensive numerical experiments on machine-learning problems demonstrate that the proposed algorithms outperform state-of-the-art stochastic optimization algorithms.
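To make the abstract's ingredients concrete, the sketch below combines an SVRG-style variance-reduced mini-batch gradient, a Fletcher–Reeves-type conjugate direction, and a crudely clipped Barzilai–Borwein step size on regularized logistic regression. It is a minimal illustration under these assumptions, not the paper's algorithm: the exact SCG updates, the RSBB stabilization, and the convergence safeguards described in the article are not reproduced, and all function names and constants are hypothetical.

import numpy as np

def batch_grad(w, X, y, lam):
    # Gradient of the L2-regularized logistic loss over the given samples (labels in {-1, +1})
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-y * z))           # probability of the observed label
    return X.T @ (-y * (1.0 - p)) / len(y) + lam * w

def vr_scg(X, y, lam=1e-3, epochs=20, batch=64, seed=0):
    # Illustrative variance-reduced stochastic CG loop; NOT the paper's exact method.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    step = 0.1                                  # initial step size (assumed)
    for _ in range(epochs):
        w_snap = w.copy()
        full_g = batch_grad(w_snap, X, y, lam)  # snapshot gradient (SVRG-style)
        w_prev = g_prev = dir_prev = None
        for _ in range(max(1, n // batch)):
            idx = rng.choice(n, size=batch, replace=False)
            # Variance-reduced mini-batch gradient estimate
            g = (batch_grad(w, X[idx], y[idx], lam)
                 - batch_grad(w_snap, X[idx], y[idx], lam) + full_g)
            if dir_prev is None:
                direction = -g                  # first step of the epoch: steepest descent
            else:
                # Fletcher-Reeves-type conjugate coefficient (one common choice)
                beta = (g @ g) / (g_prev @ g_prev + 1e-12)
                direction = -g + beta * dir_prev
                # Barzilai-Borwein-type step size, crudely clipped for stability;
                # the paper's RSBB rule is more elaborate than this.
                s, yk = w - w_prev, g - g_prev
                if abs(s @ yk) > 1e-12:
                    step = float(np.clip(abs((s @ s) / (s @ yk)), 1e-4, 1.0))
            w_prev, g_prev, dir_prev = w.copy(), g, direction
            w = w + step * direction
    return w

# Tiny usage check on synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
y = np.sign(X @ rng.standard_normal(20))
w_hat = vr_scg(X, y)
print("final gradient norm:", np.linalg.norm(batch_grad(w_hat, X, y, 1e-3)))

The epoch-wise reset of the conjugate direction and the clipping of the BB step are simple stand-ins for the stabilization the paper attributes to its RSBB rule; they keep this sketch from diverging under noisy gradients but make no claim to the article's guarantees.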