Stochastic gradient descent
Keywords: convergence; stochastic optimization; momentum; mathematical optimization; rate of convergence; stochastic approximation; gradient descent; trajectory; machine learning; artificial neural networks; artificial intelligence; computer science; mathematics
Authors
Ruinan Jin, Xingkang He
Identifier
DOI: 10.1109/icca51439.2020.9264458
Abstract
With the rapid growth of data in fields such as machine learning and networked systems, optimization-based methods inevitably face computational challenges, which stochastic optimization strategies can address effectively. As one of the most fundamental stochastic optimization algorithms, stochastic gradient descent (SGD) has been intensively developed and widely employed in machine learning over the past decade. Unfortunately, owing to technical difficulties, other SGD-based algorithms that can achieve better performance, such as momentum-based SGD (mSGD), still lack a theoretical basis. Motivated by this gap, this paper proves that the mSGD algorithm converges almost surely along each trajectory. The convergence rate of mSGD is also analyzed.
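The abstract does not state the update rule the paper analyzes. A common heavy-ball form of momentum-based SGD is v_{t+1} = β·v_t + g(w_t), w_{t+1} = w_t − α·v_{t+1}, where g is a noisy (stochastic) gradient. The sketch below illustrates that form on a toy noisy quadratic; the step size, momentum coefficient, and objective are illustrative assumptions, not the paper's setup:

```python
import random

def msgd(grad, w0, lr=0.05, beta=0.9, steps=2000):
    """Heavy-ball momentum SGD (a common mSGD form; assumed, not taken from the paper):
    v <- beta * v + grad(w),  w <- w - lr * v."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v + grad(w)   # accumulate momentum from the stochastic gradient
        w -= lr * v              # move against the accumulated direction
    return w

# Toy problem: minimize f(w) = (w - 3)^2 from noisy gradients 2(w - 3) + noise.
random.seed(0)

def noisy_grad(w):
    return 2.0 * (w - 3.0) + random.gauss(0.0, 0.05)

w_star = msgd(noisy_grad, w0=0.0)  # one trajectory; settles near the minimizer 3
```

Note that almost-sure (per-trajectory) convergence results in stochastic approximation are typically stated under decaying Robbins–Monro step sizes (Σ α_t = ∞, Σ α_t² < ∞); a constant step is used here only to keep the illustration compact.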