Computer science
Artificial intelligence
Relation extraction
Task (project management)
Security token
Fine-tuning
Artificial neural network
Machine learning
Process (computing)
Representation (politics)
Construct (Python library)
Embedding
Information extraction
Physics
Law
Programming language
Management
Economics
Political science
Operating system
Politics
Quantum mechanics
Computer security
Authors
WU Yi-zhao, Yanping Chen, Yongbin Qin, Rui Tang, Qinghua Zheng
Identifier
DOI:10.1016/j.eswa.2023.123000
Abstract
Fine-tuning and mask-tuning (or prompt tuning) are two approaches to building deep neural networks for entity and relation extraction. Fine-tuning-based models optimize neural networks with a task-relevant objective, in which pre-trained language models (PLMs) are mainly used as external resources to support word embedding. In mask-tuning models, the neural network is optimized with the same pre-training objective as the PLM and directly outputs verbalized entity-type representations, which is effective for exploiting the latent knowledge of PLMs. In this paper, we propose a recollect-tuning approach that jointly makes full use of the mechanisms of both fine-tuning and mask-tuning. Recollect-tuning iteratively masks tokens in a candidate entity span, and classification is based on both the masked-token representation and the entity-span representation, analogous to making a decision from incomplete information. During training, the deep network is optimized with a task-relevant objective, which strengthens the semantic representation of each entity span; this is effective for learning entity noise-invariant features and for taking full advantage of the latent knowledge of PLMs. Our method is evaluated on three public benchmarks (the ACE 2004, ACE 2005, and SciERC datasets) for the entity and relation extraction task. The results show significant improvement on both tasks, outperforming the state-of-the-art performance on ACE04, ACE05, and SciERC by +0.4%, +0.6%, and +0.5%, respectively.
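The following is a minimal sketch of the iterative span-masking and joint classification idea described in the abstract, not the authors' implementation: the encoder choice (bert-base-uncased), the span indices, the mean-pooled span representation, the label-set size, and the simple averaging over masked variants are all illustrative assumptions.

```python
# Sketch only: iteratively mask tokens inside a candidate entity span and
# classify from both the masked-token view and the span view.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Barack Obama visited Berlin."
enc = tokenizer(sentence, return_tensors="pt")
span = (1, 3)  # assumed token indices of a candidate entity span ("barack", "obama")

num_entity_types = 8                                   # hypothetical label-set size
hidden = encoder.config.hidden_size
classifier = nn.Linear(2 * hidden, num_entity_types)   # [masked token ; span] features

logits_per_mask = []
for pos in range(span[0], span[1]):
    # Replace one token of the candidate span with [MASK] at a time.
    masked_ids = enc["input_ids"].clone()
    masked_ids[0, pos] = tokenizer.mask_token_id

    out = encoder(input_ids=masked_ids, attention_mask=enc["attention_mask"])
    h = out.last_hidden_state[0]                       # (seq_len, hidden)

    mask_repr = h[pos]                                 # masked-token representation
    span_repr = h[span[0]:span[1]].mean(dim=0)         # mean-pooled span representation

    # Decide from both views of the (incomplete) span.
    logits_per_mask.append(classifier(torch.cat([mask_repr, span_repr], dim=-1)))

# Aggregate decisions over all masked variants (simple averaging here).
entity_logits = torch.stack(logits_per_mask).mean(dim=0)
print(entity_logits.shape)  # torch.Size([8])
```

In an actual training loop, these logits would feed a task-relevant loss (e.g. cross-entropy over entity types), which is the fine-tuning side of the approach, while the masking step is what exposes the model's PLM-style behavior on incomplete spans.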