Computer science
Task (project management)
Natural language processing
Generalization
Class (philosophy)
Reading (process)
Domain (mathematical analysis)
Entity linking
Named-entity recognition
Artificial intelligence
Comprehension
Reading comprehension
Machine learning
Linguistics
Programming language
Mathematics
Knowledge base
Philosophy
Economics
Management
Mathematical analysis
Authors
Yu Zhang, Jian Deng, Ying Ma, Jianmin Li
Identifiers
DOI:10.2991/978-94-6463-040-4_73
Abstract
Named Entity Recognition (NER) is a basic NLP task that aims to assign class labels, such as person and location, to words in free text. The traditional NER task is treated as a sequence-labeling task, but recent work has tended to convert NER into a Machine Reading Comprehension (MRC) task to obtain better representations. However, this conversion often leads to poor model generalization because there are too few class-specific instances. To address this problem, we introduce knowledge from other domains into the NER task, injecting both MRC knowledge and NLI knowledge through a multi-task learning approach. Our method is a one-stage model that combines two large NLI datasets (MNLI, SNLI) and a large traditional MRC dataset (SQuAD) with our target NER dataset for multi-task learning. Through multi-task learning, the model acquires knowledge from the NLI and MRC domains, thereby improving performance on the target dataset. We conduct extensive experiments to validate the effectiveness of our method. Our model achieves improvements of 0.3% and 0.106% over different baselines, respectively, demonstrating that introducing external knowledge is effective in improving model performance.
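The abstract describes a one-stage model in which NER (cast as MRC-style span extraction), traditional MRC (SQuAD), and NLI (MNLI, SNLI) are trained jointly over a shared encoder. The following is a minimal, hypothetical PyTorch sketch of that general setup, not the authors' implementation: all module names, hyperparameters, and the toy encoder are illustrative assumptions, and the paper presumably uses a pretrained BERT-style encoder and real datasets.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    # Toy stand-in for a pretrained text encoder; shared across all three tasks.
    def __init__(self, vocab_size=30522, hidden_size=256, num_layers=2, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        layer = nn.TransformerEncoderLayer(hidden_size, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, input_ids):
        return self.encoder(self.embed(input_ids))        # (batch, seq_len, hidden)

class MultiTaskModel(nn.Module):
    # One shared encoder, a span head for MRC-style NER and SQuAD, and an NLI head.
    def __init__(self, hidden_size=256, num_nli_labels=3):
        super().__init__()
        self.encoder = SharedEncoder(hidden_size=hidden_size)
        self.span_head = nn.Linear(hidden_size, 2)        # start / end logits per token
        self.nli_head = nn.Linear(hidden_size, num_nli_labels)

    def forward(self, input_ids, task):
        hidden = self.encoder(input_ids)
        if task in ("ner", "mrc"):                        # span-extraction tasks
            start_logits, end_logits = self.span_head(hidden).unbind(dim=-1)
            return start_logits, end_logits
        return self.nli_head(hidden[:, 0])                # classify from the first token

# One-stage multi-task step (dummy data): compute each task's loss, sum, update once.
model = MultiTaskModel()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
ce = nn.CrossEntropyLoss()

batch_ids = torch.randint(0, 30522, (4, 32))              # dummy token ids
start_gold = torch.randint(0, 32, (4,))
end_gold = torch.randint(0, 32, (4,))
nli_gold = torch.randint(0, 3, (4,))

start_logits, end_logits = model(batch_ids, task="ner")
loss = ce(start_logits, start_gold) + ce(end_logits, end_gold)
loss = loss + ce(model(batch_ids, task="nli"), nli_gold)  # add the auxiliary NLI objective
loss.backward()
optimizer.step()
optimizer.zero_grad()

In this sketch the auxiliary losses are simply summed with equal weight; how the real model mixes batches from SQuAD, MNLI, SNLI, and the target NER data, and how it weights each objective, is not specified in the abstract.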