Task (project management)
Computer science
Layer (electronics)
Artificial intelligence
Sequence (biology)
Multi-task learning
Baseline
Domain (mathematical analysis)
Knowledge sharing
Natural language processing
Machine learning
Mathematics
Knowledge management
Chemistry
Oceanography
Management
Organic chemistry
Biology
Mathematical analysis
Economics
Genetics
Geology
Authors
Niraj Pahari,Kazutaka Shimada
Identifier
DOI:10.1109/scisisis55246.2022.10001943
Abstract
The ability to learn from one task and transfer that knowledge to another is made possible by multi-task learning (MTL), which is especially helpful when data is scarce. In this paper, we study the use of MTL in natural language processing (NLP) using BERT for sentiment classification, category classification, and aspect-opinion sequence classification. The Restaurant-ACOS dataset is split into two subsets: a sentiment-category subset for the first two tasks and a sequence-classification subset for the third task. The BERT layers are grouped into bottom, middle, and top layers, and each group is either frozen, shared across tasks, or kept unshared (task-specific). Experiments are conducted to find the best combination of settings across the layers of BERT. The experiments demonstrate that MTL can improve performance on these tasks compared with a baseline that uses single-task learning. One of the best configurations froze the bottom layers, shared the middle layers, and kept the top layers task-specific. In addition, we investigate the role of each layer group by varying its size. The results suggest that the unshared layers capture task-specific patterns, whereas the shared layers capture domain knowledge.
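The freeze-bottom / share-middle / unshare-top scheme described in the abstract can be sketched in PyTorch. This is a minimal illustration under assumptions, not the authors' implementation: the group sizes (4/4/4 of a 12-layer encoder), model width, head count, and task count are hypothetical, and randomly initialized Transformer encoder layers stand in for pretrained BERT.

```python
import torch
import torch.nn as nn

class MTLEncoder(nn.Module):
    """Layer-grouping sketch: bottom layers frozen, middle layers
    shared across all tasks, top layers kept task-specific."""

    def __init__(self, num_tasks=3, d_model=64, n_heads=4, group_size=4):
        super().__init__()
        def block():
            return nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Bottom group: frozen, so it receives no gradient updates.
        self.bottom = nn.ModuleList(block() for _ in range(group_size))
        for p in self.bottom.parameters():
            p.requires_grad = False
        # Middle group: one trainable stack shared by every task.
        self.middle = nn.ModuleList(block() for _ in range(group_size))
        # Top group: a separate (unshared) trainable stack per task.
        self.top = nn.ModuleList(
            nn.ModuleList(block() for _ in range(group_size))
            for _ in range(num_tasks)
        )

    def forward(self, x, task_id):
        for layer in self.bottom:
            x = layer(x)
        for layer in self.middle:
            x = layer(x)
        for layer in self.top[task_id]:  # route through this task's stack
            x = layer(x)
        return x

model = MTLEncoder()
h = model(torch.randn(2, 5, 64), task_id=0)  # batch of 2, sequence length 5
```

During multi-task training, each batch would be routed through the stack matching its task, so gradients update the shared middle layers on every step but each top stack only on its own task's batches.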