Stationary point
Convergence (economics)
Constant (computer programming)
Mathematics
Mathematical optimization
Convex function
Stochastic gradient descent
Variance reduction
Regular polygon
Convex optimization
Proximal gradient method
Rate of convergence
Stochastic optimization
Optimization problem
Applied mathematics
Computer science
Artificial intelligence
Mathematical analysis
Channel (broadcasting)
Computer network
Geometry
Statistics
Economics
Artificial neural network
Programming language
Economic growth
Monte Carlo method
Authors
Sashank J. Reddi, Suvrit Sra, Barnabás Póczos, Alexander J. Smola
Source
Venue: Neural Information Processing Systems
Date: 2016-12-05
Volume: 29, pp. 1145-1153
Citations: 113
Abstract
We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast stochastic algorithms that provably converge to a stationary point for constant minibatches. Furthermore, using a variant of these algorithms, we obtain provably faster convergence than batch proximal gradient descent. Our results are based on the recent variance reduction techniques for convex optimization but with a novel analysis for handling nonconvex and nonsmooth functions. We also prove global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions, which subsumes several recent works.
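The abstract concerns finite-sum problems of the form min_x (1/n) Σ_i f_i(x) + h(x), where each f_i is smooth but possibly nonconvex and h is convex and nonsmooth, solved by proximal stochastic methods with variance reduction. As a rough illustration only, the following is a minimal Python sketch of a ProxSVRG-style loop in that spirit; the names (`prox_svrg`, `grad_i`), the choice of h as an l1 penalty, and all parameter defaults are hypothetical, and this is not a faithful reproduction of the paper's exact algorithm or analysis.

```python
import numpy as np

def prox_l1(x, thresh):
    """Proximal operator of thresh * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def prox_svrg(grad_i, n, x0, step, lam, n_epochs=20, inner_steps=None,
              batch=1, rng=None):
    """Sketch of a ProxSVRG-style loop for min (1/n) sum_i f_i(x) + lam*||x||_1.

    grad_i(x, idx) should return the average gradient of f_i over indices idx.
    (Hypothetical interface, for illustration only.)
    """
    rng = np.random.default_rng() if rng is None else rng
    m = inner_steps or n                 # inner-loop length; n is a common default
    x = x0.copy()
    for _ in range(n_epochs):
        snapshot = x.copy()
        full_grad = grad_i(snapshot, np.arange(n))      # full gradient at snapshot
        for _ in range(m):
            idx = rng.integers(0, n, size=batch)        # constant-size minibatch
            # variance-reduced gradient estimate:
            # minibatch gradient, corrected by the snapshot's gradients
            v = grad_i(x, idx) - grad_i(snapshot, idx) + full_grad
            # proximal step handles the convex nonsmooth part h
            x = prox_l1(x - step * v, step * lam)
    return x

# Toy usage with f_i(x) = 0.5*(a_i @ x - b_i)^2; the smooth part here is convex
# only for brevity -- the method targets nonconvex f_i as well.
A, b = np.random.randn(100, 10), np.random.randn(100)
g = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
x_hat = prox_svrg(g, n=100, x0=np.zeros(10), step=0.01, lam=0.1)
```

The snapshot correction term is what distinguishes this from plain proximal stochastic gradient: the estimate v is unbiased and its variance shrinks as x approaches the snapshot, which is the mechanism behind the convergence guarantees for constant minibatch sizes discussed in the abstract.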