Regularization (linguistics)
Stochastic gradient descent
Computer science
Decoupling (probability)
Algorithm
Source code
Applied mathematics
Artificial intelligence
Mathematics
Artificial neural network
Operating system
Engineering
Control engineering
Authors
Ilya Loshchilov, Frank Hutter
Source
Journal: Cornell University - arXiv
Date: 2017-11-14
Citations: 511
Abstract
L$_2$ regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is \emph{not} the case for adaptive gradient algorithms, such as Adam. While common implementations of these algorithms employ L$_2$ regularization (often calling it "weight decay" in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by \emph{decoupling} the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at this https URL
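The abstract contrasts L$_2$ regularization, which folds the penalty into the gradient and is therefore rescaled by Adam's adaptive terms, with decoupled weight decay, which shrinks the weights outside the adaptive step. The Python/NumPy sketch below illustrates that difference for a single Adam update. It is a minimal illustration under simplifying assumptions, not the paper's reference implementation: the function name `adam_step`, the toy quadratic loss, and the default hyperparameters are chosen for this example, and the learning-rate schedule multiplier that appears in the paper's algorithm is omitted.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=1e-2, decoupled=True):
    """One Adam update on parameters w (illustrative sketch).

    decoupled=False: L2 regularization; weight_decay * w is added to the
    gradient and therefore passes through the adaptive rescaling.
    decoupled=True:  decoupled (AdamW-style) weight decay; the decay term
    is applied directly to the weights, outside the adaptive step.
    """
    if not decoupled:
        grad = grad + weight_decay * w            # L2 penalty folded into the gradient

    m = beta1 * m + (1 - beta1) * grad            # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)

    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # adaptive gradient step
    if decoupled:
        w = w - lr * weight_decay * w             # decay applied separately from the gradient

    return w, m, v


# Toy usage: minimize 0.5 * ||w||^2 from a random starting point.
w = np.random.randn(5)
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 101):
    grad = w                                      # gradient of the toy loss
    w, m, v = adam_step(w, grad, m, v, t, decoupled=True)
print(w)
```

In the non-decoupled branch the penalty is divided by the same adaptive factor as the loss gradient, so weights with large historical gradients are decayed less; in the decoupled branch every weight is shrunk by the same relative amount, which is the distinction the paper exploits.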