BioBERT: a pre-trained biomedical language representation model for biomedical text mining

Keywords: biomedical text mining, computer science, artificial intelligence, natural language processing, language model, named entity recognition, relation extraction, corpus, F1 score, information extraction, source code
Authors
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang
Source
Journal: Bioinformatics [Oxford University Press]
Volume/Issue: 36(4): 1234-1240 · Cited by: 5078
Identifier
DOI: 10.1093/bioinformatics/btz682
Abstract

Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.
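As a hedged illustration of the fine-tuning setup the abstract describes (the same BERT architecture reused across tasks, with only a small task-specific output layer on top), the sketch below loads a BioBERT checkpoint for token-level named entity recognition using the Hugging Face transformers library. The checkpoint name dmis-lab/biobert-base-cased-v1.1, the toy BIO tag set and the single example sentence are assumptions made here for illustration; the authors' official fine-tuning code is in the repository linked above.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative assumptions: checkpoint name, toy BIO tag set, one example.
MODEL = "dmis-lab/biobert-base-cased-v1.1"
labels = ["O", "B-Disease", "I-Disease"]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=len(labels))

# One hand-labelled sentence; real fine-tuning would loop over a corpus
# such as NCBI-disease for several epochs with an optimizer.
words = ["Mutations", "in", "CFTR", "cause", "cystic", "fibrosis"]
tags = [0, 0, 0, 0, 1, 2]  # word-level indices into `labels`

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to sub-word tokens; special tokens and padding
# get -100, which the cross-entropy loss ignores.
aligned = [-100 if w is None else tags[w] for w in enc.word_ids()]

out = model(**enc, labels=torch.tensor([aligned]))
out.loss.backward()  # gradients for one fine-tuning step
print(f"loss: {out.loss.item():.4f}")

At inference time, predicted tags come from out.logits.argmax(-1), mapped back to words through the same word-to-token alignment.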