Authors
Jingrun Chen,Xurong Chi,E Weinan,Zhouwang Yang
Abstract
Stochastic gradient Langevin dynamics (SGLD) is a standard sampling technique for uncertainty estimation in Bayesian neural networks. Past methods have shown improved convergence by preconditioning SGLD with RMSprop statistics; this preconditioning adapts to the local geometry of the parameter space and improves the performance of deep neural networks. In this paper, we develop another preconditioning technique that accelerates training and improves convergence by incorporating a recently developed batch normalization preconditioning (BNP) into our methods. BNP uses mini-batch statistics to improve the conditioning of the Hessian of the loss function in traditional neural networks and thus improve convergence. We show that applying BNP to SGLD improves the conditioning of the Fisher information matrix, which in turn improves convergence. We present results on three experiments, including a simulation example, a contextual bandit example, and a residual network, which show the improved initial convergence provided by BNP as well as an improved condition number.
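To make the baseline concrete, the RMSprop-preconditioned SGLD update that the abstract refers to can be sketched as follows. This is a minimal illustrative implementation, not the paper's BNP method: the function name, hyperparameters, and diagonal-preconditioner form are assumptions chosen for clarity.

```python
import numpy as np

def preconditioned_sgld_step(theta, grad, v, lr=1e-3, beta=0.99, eps=1e-8, rng=None):
    """One RMSprop-preconditioned SGLD step (illustrative sketch).

    theta : parameter vector
    grad  : stochastic gradient of the negative log-posterior on a mini-batch
    v     : running average of squared gradients (RMSprop second moment)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # RMSprop second-moment estimate of the gradient
    v = beta * v + (1.0 - beta) * grad**2
    # Diagonal preconditioner adapting to local curvature
    G = 1.0 / (np.sqrt(v) + eps)
    # Langevin noise scaled by the preconditioner
    noise = rng.normal(size=theta.shape) * np.sqrt(lr * G)
    # Preconditioned drift plus injected noise
    theta = theta - 0.5 * lr * G * grad + noise
    return theta, v
```

The injected Gaussian noise is what distinguishes this sampler from plain RMSprop: iterates explore the posterior rather than collapsing to a point estimate, while the preconditioner `G` rescales both drift and noise to the local geometry.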