Keywords
Langevin dynamics
Stochastic gradient descent
Overfitting
Curvature
Computer science
Convergence (economics)
Artificial neural network
Jacobian matrix and determinant
Applied mathematics
Mathematics
Mathematical optimization
Artificial intelligence
Geometry
Statistics
Economics
Economic growth
Author(s)
Chunyuan Li, Changyou Chen, David Carlson, Lawrence Carin
Source
Journal: Cornell University - arXiv
Date: 2015-01-01
Citations: 113
Identifier
DOI: 10.48550/arxiv.1512.07666
Abstract
Effective training of deep neural networks suffers from two main issues. The first is that the parameter spaces of these models exhibit pathological curvature. Recent methods address this problem by using adaptive preconditioning for Stochastic Gradient Descent (SGD). These methods improve convergence by adapting to the local geometry of parameter space. A second issue is overfitting, which is typically addressed by early stopping. However, recent work has demonstrated that Bayesian model averaging mitigates this problem. The posterior can be sampled by using Stochastic Gradient Langevin Dynamics (SGLD). However, the rapidly changing curvature renders default SGLD methods inefficient. Here, we propose combining adaptive preconditioners with SGLD. In support of this idea, we give theoretical properties on asymptotic convergence and predictive risk. We also provide empirical results for Logistic Regression, Feedforward Neural Nets, and Convolutional Neural Nets, demonstrating that our preconditioned SGLD method gives state-of-the-art performance on these models.
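To make the idea in the abstract concrete, the following is a minimal sketch (not the authors' reference implementation) of one way a preconditioned SGLD update might look, assuming an RMSprop-style diagonal preconditioner. The function name `psgld_step`, the hyperparameter values, and the omission of the paper's Γ(θ) correction term are assumptions made here for brevity; the toy sampling target at the end is purely illustrative.

```python
import numpy as np

def psgld_step(theta, grad_log_post, v, lr=1e-3, alpha=0.99, lam=1e-5, rng=None):
    """One preconditioned SGLD update with an RMSprop-style diagonal preconditioner.

    theta         : current parameter vector (np.ndarray)
    grad_log_post : stochastic gradient of the log posterior at theta
                    (prior gradient + (N / minibatch_size) * sum of likelihood gradients)
    v             : exponential moving average of squared gradients (same shape as theta)
    Returns the updated (theta, v). The Gamma(theta) correction term is omitted here.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Track local curvature via an EMA of squared stochastic gradients.
    v = alpha * v + (1.0 - alpha) * grad_log_post ** 2
    # Diagonal preconditioner: larger steps along flat directions, smaller along steep ones.
    G = 1.0 / (lam + np.sqrt(v))
    # Langevin update: preconditioned gradient half-step plus Gaussian noise
    # whose (diagonal) covariance is lr * G.
    noise = rng.normal(size=theta.shape) * np.sqrt(lr * G)
    theta = theta + 0.5 * lr * G * grad_log_post + noise
    return theta, v

# Toy usage: draw approximate samples from a 2-D Gaussian posterior N(mu, I).
mu = np.array([1.0, -2.0])
theta, v = np.zeros(2), np.zeros(2)
samples = []
for t in range(5000):
    grad = -(theta - mu)            # exact gradient of the log density, for illustration
    theta, v = psgld_step(theta, grad, v, lr=1e-2)
    if t > 1000:                    # discard burn-in
        samples.append(theta.copy())
print(np.mean(samples, axis=0))     # should be close to mu
```

In the paper's setting, `grad_log_post` would be the minibatch-scaled stochastic gradient used for deep networks, and the step size would typically be annealed over iterations.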