Concepts
Computer science
Knowledge base
Normalization (sociology)
Artificial intelligence
Raw data
Task (project management)
Benchmark (surveying)
Knowledge extraction
Machine learning
Data science
Data mining
Information retrieval
Sociology
Anthropology
Economics
Management
Programming language
Geography
Geodesy
Authors
Songhua Yang, Chenghao Zhang, Hongfei Xu, Yuxiang Jia
Source
Journal: Cornell University - arXiv
Date: 2023-08-23
Identifiers
DOI: 10.48550/arxiv.2308.12025
Abstract
The Biomedical Entity Normalization (BEN) task aims to align raw, unstructured medical entities to standard entities, thus promoting data coherence and facilitating better downstream medical applications. Recently, prompt learning methods have shown promising results in this task. However, existing research falls short in tackling the more complex Chinese BEN task, especially in the few-shot scenario with limited medical data, and the vast potential of the external medical knowledge base has yet to be fully harnessed. To address these challenges, we propose a novel Knowledge-injected Prompt Learning (PL-Knowledge) method. Specifically, our approach consists of five stages: candidate entity matching, knowledge extraction, knowledge encoding, knowledge injection, and prediction output. By effectively encoding the knowledge items contained in medical entities and incorporating them into our tailor-made knowledge-injected templates, the additional knowledge enhances the model's ability to capture latent relationships between medical entities, thus achieving a better match with the standard entities. We extensively evaluate our model on a benchmark dataset in both few-shot and full-scale scenarios. Our method outperforms existing baselines, with an average accuracy boost of 12.96% in few-shot and 0.94% in full-data cases, showcasing its excellence in the BEN task.
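To make the five-stage pipeline named in the abstract concrete, the sketch below walks through candidate entity matching, knowledge extraction, knowledge encoding, knowledge injection, and prediction output on toy data. All function names, the miniature knowledge base, the prompt wording, and the string-similarity scoring are assumptions for illustration only; the paper's actual prompt-learning model and knowledge encoder are not reproduced here.

```python
# Illustrative sketch of a knowledge-injected prompt-learning pipeline for
# entity normalization. Stages mirror the abstract; the scoring is a simple
# string-overlap stand-in, not the paper's masked-language-model scorer.
from difflib import SequenceMatcher

# Toy standard-entity inventory and external knowledge base (assumed content).
STANDARD_ENTITIES = ["type 2 diabetes mellitus", "hypertension", "chronic gastritis"]
KNOWLEDGE_BASE = {
    "type 2 diabetes mellitus": ["endocrine disorder", "elevated blood glucose"],
    "hypertension": ["cardiovascular disorder", "elevated blood pressure"],
    "chronic gastritis": ["digestive disorder", "gastric mucosa inflammation"],
}

def match_candidates(mention, k=2):
    """Stage 1: retrieve the top-k candidate standard entities by surface similarity."""
    scored = [(SequenceMatcher(None, mention, e).ratio(), e) for e in STANDARD_ENTITIES]
    return [e for _, e in sorted(scored, reverse=True)[:k]]

def extract_knowledge(entity):
    """Stage 2: look up knowledge items for a candidate entity in the external KB."""
    return KNOWLEDGE_BASE.get(entity, [])

def encode_knowledge(items):
    """Stage 3: encode knowledge items; here a flat text span stands in for the
    embeddings a real knowledge encoder would produce."""
    return "; ".join(items)

def inject_knowledge(mention, candidate, knowledge_span):
    """Stage 4: fill a knowledge-injected prompt template (wording is assumed)."""
    return (f"Mention: {mention}. Candidate: {candidate}. "
            f"Knowledge: {knowledge_span}. Do they refer to the same concept? [MASK]")

def predict(mention, prompts_and_candidates):
    """Stage 5: score each (prompt, candidate) pair and output the best match;
    string overlap replaces the prompt model's [MASK] prediction here."""
    best = max(prompts_and_candidates,
               key=lambda pc: SequenceMatcher(None, mention, pc[1]).ratio())
    return best[1]

def normalize(mention):
    candidates = match_candidates(mention)
    prompts = [(inject_knowledge(mention, c, encode_knowledge(extract_knowledge(c))), c)
               for c in candidates]
    return predict(mention, prompts)

if __name__ == "__main__":
    print(normalize("diabetes type II"))  # expected: "type 2 diabetes mellitus"
```

In the actual method, stages 3 through 5 would run inside a pretrained language model, with encoded knowledge injected into the prompt template so the model can exploit latent relationships between the mention and each candidate; the toy functions above only trace the data flow between the five stages.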