Computer science
Multi-task learning
Overfitting
Hyperparameter
Artificial intelligence
Hyperparameter optimization
Machine learning
Normalization (sociology)
Artificial neural network
Task (project management)
Deep learning
Support vector machine
Anthropology
Sociology
Economics
Management
Authors
Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, Andrew Rabinovich
Source
Venue: International Conference on Machine Learning
Date: 2018-07-03
Pages: 794-803
Citations: 347
Abstract
Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter $\alpha$. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.
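The abstract describes GradNorm as balancing multitask training by dynamically tuning per-task gradient magnitudes, governed by a single asymmetry hyperparameter $\alpha$. The sketch below is a minimal, hedged illustration of that idea in PyTorch, not the authors' released implementation: the class name GradNormBalancer, the use of the last shared layer's weight tensor as the reference point for gradient norms, and the Adam optimizer with lr=0.025 and alpha=1.5 defaults are all assumptions made for illustration.

```python
# Minimal sketch of GradNorm-style loss balancing (assumed names and defaults).
import torch


class GradNormBalancer:
    """Keeps one learnable weight per task and nudges the weights each step so
    that per-task gradient norms track relative inverse training rates."""

    def __init__(self, num_tasks, alpha=1.5, lr=0.025):
        self.alpha = alpha
        # Task weights w_i, initialized to 1 and trained alongside the network.
        self.weights = torch.ones(num_tasks, requires_grad=True)
        # Optimizer choice and learning rate are illustrative assumptions.
        self.opt = torch.optim.Adam([self.weights], lr=lr)
        self.initial_losses = None  # L_i(0), recorded on the first step

    def step(self, task_losses, shared_param):
        """task_losses: list of scalar losses L_i; shared_param: weight tensor of
        the last shared layer, used as the reference for gradient norms."""
        losses = torch.stack(task_losses)
        if self.initial_losses is None:
            self.initial_losses = losses.detach()

        # G_i = || grad_W (w_i * L_i) ||_2 for each task.
        grad_norms = []
        for i, loss in enumerate(task_losses):
            g, = torch.autograd.grad(self.weights[i] * loss, shared_param,
                                     retain_graph=True, create_graph=True)
            grad_norms.append(g.norm())
        grad_norms = torch.stack(grad_norms)

        # Relative inverse training rates r_i = (L_i / L_i(0)) / mean_j (L_j / L_j(0)).
        loss_ratios = losses.detach() / self.initial_losses
        inverse_rates = loss_ratios / loss_ratios.mean()

        # Target norms mean(G) * r_i^alpha are treated as constants.
        target = (grad_norms.mean() * inverse_rates ** self.alpha).detach()
        gradnorm_loss = (grad_norms - target).abs().sum()

        # Update only the task weights, then renormalize so they sum to num_tasks.
        self.opt.zero_grad()
        self.weights.grad, = torch.autograd.grad(gradnorm_loss, self.weights,
                                                 retain_graph=True)
        self.opt.step()
        with torch.no_grad():
            self.weights.data = (self.weights.data.clamp(min=1e-6)
                                 * len(task_losses) / self.weights.data.sum())

        # Weighted total loss for the ordinary network update.
        return (self.weights.detach() * losses).sum()
```

In a training loop one would call balancer.step(...) with the list of per-task losses and the shared layer's weight tensor, then backpropagate the returned weighted loss through the network as usual.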