RadBERT: Adapting Transformer-based Language Models to Radiology

Authors
An Yan, Julian McAuley, Xing Lü, Jiang Du, Eric Chang, Amilcare Gentili, Chun-Nan Hsu
Source
Journal: Radiology: Artificial Intelligence [Radiological Society of North America]
Volume/Issue: 4 (4); Citations: 68
Identifier
DOI: 10.1148/ryai.210258
Abstract

To investigate whether tailoring a transformer-based language model to radiology is beneficial for radiology natural language processing (NLP) applications.

This retrospective study presents a family of bidirectional encoder representations from transformers (BERT)-based language models adapted for radiology, named RadBERT. Transformers were pretrained with either 2.16 or 4.42 million radiology reports from U.S. Department of Veterans Affairs health care systems nationwide on top of four different initializations (BERT-base, Clinical-BERT, robustly optimized BERT pretraining approach [RoBERTa], and BioMed-RoBERTa) to create six variants of RadBERT. Each variant was fine-tuned for three representative NLP tasks in radiology: (a) abnormal sentence classification: models classified sentences in radiology reports as reporting abnormal or normal findings; (b) report coding: models assigned a diagnostic code to a given radiology report for five coding systems; and (c) report summarization: given the findings section of a radiology report, models selected key sentences that summarized the findings. Model performance was compared by bootstrap resampling against five intensively studied transformer language models as baselines: BERT-base, BioBERT, Clinical-BERT, BlueBERT, and BioMed-RoBERTa.

For abnormal sentence classification, all models performed well (accuracies above 97.5 and F1 scores above 95.0). RadBERT variants achieved significantly higher scores than the corresponding baselines when given only 10% or less of the 12 458 annotated training sentences. For report coding, all variants significantly outperformed the baselines for all five coding systems. The variant RadBERT-BioMed-RoBERTa performed best among all models for report summarization, achieving a Recall-Oriented Understudy for Gisting Evaluation (ROUGE)-1 score of 16.18, compared with 15.27 for the corresponding baseline (BioMed-RoBERTa; P < .004).

Transformer-based language models tailored to radiology improved performance on radiology NLP tasks compared with baseline transformer language models.

Keywords: Translation, Unsupervised Learning, Transfer Learning, Neural Networks, Informatics

Supplemental material is available for this article. © RSNA, 2022. See also the commentary by Wiggins and Tejani in this issue.
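As a concrete illustration of the first task, abnormal sentence classification, the sketch below fine-tunes a pretrained BERT-style checkpoint as a binary sentence classifier with the Hugging Face transformers library. This is a minimal sketch under stated assumptions: the checkpoint name, example sentences, and labels are placeholders of mine, not the paper's data or code, and the actual RadBERT variants start from checkpoints further pretrained on millions of VA radiology reports.

```python
# Minimal sketch: fine-tuning a pretrained transformer for binary
# abnormal/normal sentence classification (task (a) in the abstract).
# "bert-base-uncased" stands in for a RadBERT initialization; the toy
# sentences and labels below are illustrative, not study data.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # placeholder for a radiology-adapted checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

sentences = [
    "No acute cardiopulmonary abnormality.",              # normal finding
    "There is a 1.2-cm nodule in the right upper lobe.",  # abnormal finding
]
labels = torch.tensor([0, 1])  # 0 = normal, 1 = abnormal (toy annotations)

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the toy batch
    out = model(**batch, labels=labels)  # cross-entropy loss computed internally
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    pred = model(**batch).logits.argmax(dim=-1)  # predicted class per sentence
print(pred.tolist())
```

In the study itself, each RadBERT variant and each baseline was fine-tuned in this general way on up to 12 458 annotated sentences; the low-resource comparison simply shrinks that training set to 10% or less.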
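The abstract's significance claims rest on bootstrap resampling of the test set. The paper's exact procedure is not reproduced here, so the sketch below shows one common form of the technique under assumed details: resample test examples with replacement many times and report how often one model's metric fails to beat the other's as a one-sided P value. The per-example correctness arrays are made up for illustration.

```python
# Minimal sketch: paired bootstrap comparison of two classifiers on a
# shared test set. correct_a/correct_b are toy per-example 0/1 outcomes,
# not results from the study.
import numpy as np

rng = np.random.default_rng(0)
correct_a = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])  # model A hits (toy)
correct_b = np.array([1, 0, 0, 1, 1, 0, 0, 1, 1, 1])  # model B hits (toy)

n_boot, n = 10_000, len(correct_a)
wins = 0
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)  # resample examples with replacement
    wins += correct_a[idx].mean() > correct_b[idx].mean()

p_value = 1 - wins / n_boot  # fraction of resamples where A does not win
print(f"one-sided bootstrap P = {p_value:.4f}")
```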
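The summarization result is quoted as a ROUGE-1 score (16.18 vs 15.27, on a 0-100 scale). ROUGE-1 measures unigram overlap between a candidate summary and a reference and is commonly reported as the F1 of unigram precision and recall; the from-scratch sketch below implements that common formulation as an assumption, not the evaluation toolkit the authors used.

```python
# Minimal sketch: ROUGE-1 as unigram-overlap F1 for one candidate/reference
# pair. Real evaluations average over a test set and may use the official
# ROUGE toolkit with stemming and other options.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Toy radiology-flavored example (invented sentences):
print(round(100 * rouge1_f1(
    "stable small right pleural effusion",
    "small right pleural effusion is unchanged"), 2))
```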
