Computer science
Unified Medical Language System
Relation extraction
Named entity recognition
Natural language processing
Field (mathematical analysis)
Benchmark (surveying)
Relation (database)
Knowledge base
Artificial intelligence
Process (computing)
Entity linking
Domain knowledge
Language model
Biomedical text mining
Information extraction
Text mining
Data mining
Task (project management)
Programming language
Management
Economics
Geography
Mathematical analysis
Mathematics
Geodesy
Identifier
DOI:10.1109/bibm55620.2022.9995583
Abstract
Pretrained language models have achieved widespread success on various natural language processing tasks. In the biomedical domain, one line of research utilizes large in-domain corpora for pre-training. While these models achieve remarkable improvements on in-domain tasks, they do not take into account the positive role of large-scale in-domain knowledge bases. Integrating biomedical knowledge from a knowledge base such as the Unified Medical Language System (UMLS) into these models can further benefit in-domain downstream tasks, such as biomedical named entity recognition and relation extraction. To this end, we propose BioELM, a pre-trained language model based on entity linking that explicitly leverages knowledge from the UMLS knowledge base. We use a two-layer entity-linking structure to integrate entity representations. To optimize the pre-training process, we refine the masked language modeling objective and add two auxiliary training objectives: named entity recognition and entity linking. We validate the performance of BioELM on named entity recognition and relation extraction tasks from the BLURB benchmark. The experimental results demonstrate that BioELM's pre-training tasks and entity-linking strategy improve performance on both biomedical named entity recognition and relation extraction.
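To make the multi-task setup concrete, below is a minimal sketch of a shared encoder trained with the three objectives the abstract names: masked language modeling, named entity recognition, and entity linking against a table of UMLS concept embeddings. This is not the authors' implementation; the toy encoder, all dimensions, the class name `BioELMSketch`, and the unweighted loss sum are illustrative assumptions.

```python
# Hypothetical sketch of joint pre-training with MLM + NER + entity-linking
# heads over one shared encoder. Sizes and architecture are assumptions.
import torch
import torch.nn as nn

class BioELMSketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, num_ner_tags=9,
                 num_umls_entities=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Head 1: masked language modeling (predict masked tokens).
        self.mlm_head = nn.Linear(hidden, vocab_size)
        # Head 2: token-level NER tagging (e.g. BIO-style labels).
        self.ner_head = nn.Linear(hidden, num_ner_tags)
        # Head 3: entity linking -- score token states against a
        # (hypothetical) table of UMLS concept embeddings.
        self.entity_table = nn.Embedding(num_umls_entities, hidden)
        self.el_proj = nn.Linear(hidden, hidden)

    def forward(self, input_ids, mlm_labels, ner_labels, el_labels):
        h = self.encoder(self.embed(input_ids))        # (B, T, H)
        ce = nn.CrossEntropyLoss(ignore_index=-100)    # -100 = unlabeled
        mlm_loss = ce(self.mlm_head(h).flatten(0, 1), mlm_labels.flatten())
        ner_loss = ce(self.ner_head(h).flatten(0, 1), ner_labels.flatten())
        # Entity-linking logits: similarity of projected token states to
        # every concept embedding in the table.
        el_logits = self.el_proj(h) @ self.entity_table.weight.T  # (B, T, E)
        el_loss = ce(el_logits.flatten(0, 1), el_labels.flatten())
        # Joint objective: unweighted sum (the weighting is an open choice).
        return mlm_loss + ner_loss + el_loss

if __name__ == "__main__":
    model = BioELMSketch()
    B, T = 2, 16
    ids = torch.randint(0, 30522, (B, T))
    mlm = torch.full((B, T), -100, dtype=torch.long)
    mlm[:, 3] = ids[:, 3]                  # supervise one masked position
    ner = torch.randint(0, 9, (B, T))      # toy NER tags
    el = torch.randint(0, 10000, (B, T))   # toy UMLS concept ids
    print(float(model(ids, mlm, ner, el)))
```

The `ignore_index=-100` convention lets unlabeled positions (non-masked tokens, tokens outside entity mentions) drop out of each loss term, so the three objectives can share one batch without full annotation coverage.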