Computer science, Convergence (economics), Artificial neural network, Boosting (machine learning), Artificial intelligence, Machine learning, Deep neural network, Algorithm, Simplicity (philosophy), Backpropagation, Mathematical optimization, Mathematics, Philosophy, Epistemology, Economics, Economic growth
Identifier
DOI: 10.1007/978-3-031-33374-3_26
Abstract
In this paper, we introduce weight prediction into the AdamW optimizer to boost its convergence when training deep neural network (DNN) models. In particular, ahead of each mini-batch training step, we predict the future weights according to the update rule of AdamW and then use the predicted weights to perform both the forward pass and backward propagation. In this way, AdamW always uses the gradients with respect to the future weights, rather than the current weights, to update the DNN parameters, which leads to better convergence. Our proposal is simple and straightforward to implement yet effective in boosting the convergence of DNN training. We performed extensive experimental evaluations on image classification and language modeling tasks to verify the effectiveness of our proposal. The experimental results validate that our proposal boosts the convergence of AdamW and achieves better accuracy than AdamW when training DNN models.
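The following is a minimal PyTorch sketch of the idea described in the abstract, assuming the future weights are predicted by replaying AdamW's decoupled update with the optimizer's current moment estimates (no new gradients); the helper names `predict_weights`, `restore_weights`, and the `steps_ahead` parameter are illustrative and not taken from the paper.

```python
# Minimal sketch (not the authors' code): weight prediction with AdamW in PyTorch.
import torch
import torch.nn as nn

def predict_weights(optimizer, steps_ahead=1):
    """Predict future weights by replaying AdamW's update rule with the current
    moment estimates (an approximation; no new gradients are used).
    Returns a backup of the current weights so they can be restored later."""
    backup = {}
    for group in optimizer.param_groups:
        lr, eps, wd = group["lr"], group["eps"], group["weight_decay"]
        beta1, beta2 = group["betas"]
        for p in group["params"]:
            state = optimizer.state.get(p)
            if not state:                      # no step taken yet: nothing to predict
                continue
            backup[p] = p.detach().clone()
            t = state["step"]
            step_t = t.item() if torch.is_tensor(t) else t
            m_hat = state["exp_avg"] / (1 - beta1 ** step_t)      # bias-corrected 1st moment
            v_hat = state["exp_avg_sq"] / (1 - beta2 ** step_t)   # bias-corrected 2nd moment
            with torch.no_grad():
                for _ in range(steps_ahead):
                    p.mul_(1 - lr * wd)                            # decoupled weight decay
                    p.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)
    return backup

def restore_weights(backup):
    """Roll parameters back to the weights saved before prediction."""
    with torch.no_grad():
        for p, w in backup.items():
            p.copy_(w)

# Usage: forward/backward with the predicted weights, then update the original
# weights with the gradients just computed (gradients w.r.t. future weights).
model = nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
for _ in range(5):
    backup = predict_weights(optimizer)        # 1) predict future weights
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()                            # 2) gradients at the predicted weights
    restore_weights(backup)                    # 3) roll back to current weights
    optimizer.step()                           # 4) AdamW update with those gradients
```

In this sketch the very first iteration falls back to a plain AdamW step, since the moment estimates needed for prediction only exist after at least one update.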