Computer science
Exploit (computer security)
Relation extraction
Task (project management)
Relation (database)
Artificial intelligence
Set (abstract data type)
Process (computing)
Machine learning
Constraint (computer-aided design)
Resource (disambiguation)
Information extraction
Natural language processing
Data mining
Programming language
Engineering
Mechanical engineering
Economics
Computer security
Management
Computer network
Authors
Yang Chen, Bowen Shi, Ke Xu
Identifier
DOI: 10.1016/j.ins.2023.120060
Abstract
Tremendous progress has been made in the development of fine-tuned pretrained language models (PLMs), which achieve outstanding results on almost all natural language processing (NLP) tasks. The rich knowledge distributed within PLMs can be further elicited by fine-tuning with additional prompts, namely prompt tuning. Generally, prompt engineering involves prompt template engineering, which searches for an appropriate template for a specific task, and answer engineering, which seeks an answer space and maps it to the original task label set. Existing prompt-based methods are primarily designed manually and search for appropriate verbalizations in a discrete answer space, which is restrictive and often results in suboptimal performance for complex NLP tasks such as relation extraction (RE). Therefore, we propose a novel prompt-tuning method with a continuous answer search for RE, which enables the model to find optimal answer word representations in a continuous space through gradient descent and thus fully exploit the relation semantics. To further exploit entity-type information and integrate structured knowledge into our approach, we design and add an additional TransH-based structured knowledge constraint to the optimization procedure. We conduct comprehensive experiments on four RE benchmarks to evaluate the effectiveness of the proposed approach. The experimental results show that, compared to existing baselines, our approach achieves competitive or superior performance without manual answer engineering under both fully supervised and low-resource scenarios.
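To make the "continuous answer search" concrete, the following is a minimal sketch of one plausible reading of the idea, not the paper's actual implementation: a BERT-style PLM yields a hidden state at the prompt's [MASK] position, and each relation label owns a trainable answer vector that is optimized by gradient descent instead of being picked from the discrete vocabulary. All names here (ContinuousAnswerHead, hidden_dim, num_relations) are illustrative assumptions.

import torch
import torch.nn as nn

class ContinuousAnswerHead(nn.Module):
    def __init__(self, num_relations: int, hidden_dim: int):
        super().__init__()
        # One trainable "answer word" vector per relation label, searched in
        # continuous space by gradient descent rather than chosen from the
        # PLM vocabulary (illustrative assumption, not the paper's code).
        self.answer_emb = nn.Parameter(torch.randn(num_relations, hidden_dim) * 0.02)

    def forward(self, mask_hidden: torch.Tensor) -> torch.Tensor:
        # mask_hidden: (batch, hidden_dim) hidden state at the [MASK] position.
        # Score each relation by similarity to its learned answer vector.
        return mask_hidden @ self.answer_emb.t()  # (batch, num_relations)

head = ContinuousAnswerHead(num_relations=40, hidden_dim=768)
logits = head(torch.randn(8, 768))                      # stand-in [MASK] states
loss = nn.functional.cross_entropy(logits, torch.randint(0, 40, (8,)))
loss.backward()                                         # answer vectors receive gradients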
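The abstract does not spell out the TransH-based constraint, so the sketch below uses the standard TransH scoring function (Wang et al., 2014) as an assumed form: head and tail entity embeddings are projected onto a relation-specific hyperplane with unit normal w_r and related by a translation d_r on that hyperplane; a lower score means a more plausible triple. How the paper weights this term against the prompt-tuning loss is not stated here.

import torch
import torch.nn.functional as F

def transh_score(h, t, w_r, d_r):
    # Standard TransH plausibility score (assumed form of the constraint).
    w_r = F.normalize(w_r, dim=-1)                        # keep ||w_r|| = 1
    h_perp = h - (h * w_r).sum(-1, keepdim=True) * w_r    # project h onto hyperplane
    t_perp = t - (t * w_r).sum(-1, keepdim=True) * w_r    # project t onto hyperplane
    return ((h_perp + d_r - t_perp) ** 2).sum(-1)         # lower = more plausible

# Toy usage: the score is differentiable, so it can be added to a training loss.
dim = 16
h, t, w_r, d_r = (torch.randn(dim, requires_grad=True) for _ in range(4))
constraint = transh_score(h, t, w_r, d_r)
constraint.backward()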