Computer science
Artificial intelligence
Machine learning
Deep learning
Leverage (statistics)
Overfitting
Architecture
Natural language understanding
Regularization (linguistics)
Task (project management)
Multi-task learning
Natural language processing
Natural language
Artificial neural network
Management
Economics
Art
Visual arts
Authors
Shijie Chen,Yu Zhang,Qiang Yang
Abstract
Deep learning approaches have achieved great success in the field of Natural Language Processing (NLP). However, directly training deep neural models often suffers from the overfitting and data scarcity problems that are pervasive in NLP tasks. In recent years, Multi-Task Learning (MTL), which can leverage useful information from related tasks to improve performance on all of them simultaneously, has been used to address these problems. In this article, we give an overview of the use of MTL in NLP tasks. We first review the MTL architectures used in NLP tasks and categorize them into four classes: parallel, hierarchical, modular, and generative adversarial architectures. We then present optimization techniques for properly training a multi-task model, covering loss construction, gradient regularization, data sampling, and task scheduling. After presenting applications of MTL in a variety of NLP tasks, we introduce some benchmark datasets. Finally, we conclude and discuss several possible research directions in this field.
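Of the four architecture classes mentioned above, the parallel architecture with hard parameter sharing is the simplest: a shared encoder feeds task-specific output heads, and the per-task losses are combined into a single training objective (one basic form of loss construction). The following is a minimal illustrative sketch, not the authors' implementation; the module names, dimensions, tasks, and fixed loss weights are all assumptions chosen only for the example.

```python
# Minimal sketch of parallel MTL with hard parameter sharing (illustrative only).
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Layers shared by every task: embedding + BiLSTM, mean-pooled."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return out.mean(dim=1)  # sentence representation of size 2 * hidden

class MultiTaskModel(nn.Module):
    """Parallel architecture: one shared encoder, one linear head per task."""
    def __init__(self, task_num_classes):
        super().__init__()
        self.encoder = SharedEncoder()
        self.heads = nn.ModuleDict(
            {name: nn.Linear(512, n) for name, n in task_num_classes.items()}
        )

    def forward(self, token_ids, task):
        return self.heads[task](self.encoder(token_ids))

# Toy usage: two hypothetical classification tasks and a fixed-weight joint loss.
model = MultiTaskModel({"sentiment": 2, "topic": 5})
loss_fn = nn.CrossEntropyLoss()
task_weights = {"sentiment": 1.0, "topic": 0.5}  # illustrative fixed loss weights

batches = {
    "sentiment": (torch.randint(0, 10000, (4, 20)), torch.randint(0, 2, (4,))),
    "topic": (torch.randint(0, 10000, (4, 20)), torch.randint(0, 5, (4,))),
}
total_loss = sum(
    task_weights[t] * loss_fn(model(x, t), y) for t, (x, y) in batches.items()
)
total_loss.backward()  # gradients reach both shared and task-specific parameters
```

In practice the fixed weights above are only the simplest choice; the survey's optimization techniques (e.g., adaptive loss weighting, gradient regularization, and task scheduling) replace or refine this step.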