Performance of GPT-4 with Vision on Text- and Image-based ACR Diagnostic Radiology In-Training Examination Questions

Subject tags: Medicine · Medical Physics · Radiology · Artificial Intelligence · Training · Computer Vision · Computer Science · Physics
Authors
Nolan Hayden, Spencer Gilbert, Laila Poisson, Brent Griffith, Chad Klochko, Shannyn Wolfe
Source
Journal: Radiology [Radiological Society of North America]
Volume/Issue: 312 (3) | Cited by: 1
Identifier
DOI: 10.1148/radiol.240153
Abstract

Background: Recent advancements, including image processing capabilities, present new potential applications of large language models such as ChatGPT (OpenAI), a generative pretrained transformer, in radiology. However, the baseline performance of ChatGPT in radiology-related tasks is understudied.

Purpose: To evaluate the performance of GPT-4 with vision (GPT-4V) on radiology in-training examination questions, including those with images, to gauge the model's baseline knowledge in radiology.

Materials and Methods: In this prospective study, conducted between September 2023 and March 2024, the September 2023 release of GPT-4V was assessed using 386 retired questions (189 image-based and 197 text-only questions) from the American College of Radiology Diagnostic Radiology In-Training Examinations. Nine question pairs were identified as duplicates; only the first instance of each duplicate was considered in ChatGPT's assessment. A subanalysis assessed the impact of different zero-shot prompts on performance. Statistical analysis included χ2 tests of independence to ascertain whether the performance of GPT-4V varied between question types or subspecialties. The McNemar test was used to evaluate performance differences between the prompts, with Benjamini-Hochberg adjustment of the P values conducted to control the false discovery rate (FDR). A P value threshold of less than .05 denoted statistical significance.

Results: GPT-4V correctly answered 246 (65.3%) of the 377 unique questions, with significantly higher accuracy on text-only questions (81.5%, 159 of 195) than on image-based questions (47.8%, 87 of 182) (χ2 test, P < .001). Subanalysis revealed differences between prompts on text-based questions, where chain-of-thought prompting outperformed long instruction by 6.1% (McNemar test, P = .02; FDR = 0.063), basic prompting by 6.8% (P = .009; FDR = 0.044), and the original prompting style by 8.9% (P = .001; FDR = 0.014). No differences were observed between prompts on image-based questions (P values of .27 to >.99).

Conclusion: While GPT-4V demonstrated a level of competence on text-based questions, it showed deficits in interpreting radiologic images.

© RSNA, 2024. See also the editorial by Deng in this issue.
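The abstract's statistical workflow (a χ2 test of independence for text-only vs. image-based accuracy, McNemar tests for paired prompt comparisons, and Benjamini-Hochberg control of the FDR) can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' code, and assumes scipy and statsmodels: the chi-square counts are taken from the abstract, the McNemar 2x2 table is hypothetical because the abstract reports only P values, and the BH-adjusted values will not match the paper's reported FDRs, which were computed over the paper's full set of comparisons.

```python
# Minimal sketch (not the authors' analysis) of the tests named in the abstract.
from scipy.stats import chi2_contingency
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.multitest import multipletests

# Correct/incorrect counts reported in the abstract (377 unique questions).
text_correct, text_total = 159, 195     # 81.5% accuracy on text-only questions
image_correct, image_total = 87, 182    # 47.8% accuracy on image-based questions

contingency = [
    [text_correct, text_total - text_correct],
    [image_correct, image_total - image_correct],
]
chi2, p, dof, _ = chi2_contingency(contingency)
print(f"chi-square = {chi2:.1f}, P = {p:.2g}")   # abstract reports P < .001

# McNemar test for a paired prompt comparison (same questions answered under two
# prompts). The discordant-pair counts below are invented for illustration only.
paired_table = [
    [150, 15],   # correct under both prompts | correct under prompt A only
    [3, 27],     # correct under prompt B only | incorrect under both
]
print(f"McNemar P = {mcnemar(paired_table, exact=True).pvalue:.3f}")

# Benjamini-Hochberg FDR adjustment applied to the three raw McNemar P values
# quoted in the abstract for text-based questions.
raw_p = [0.02, 0.009, 0.001]   # vs. long instruction, basic, original prompting
_, adjusted, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for label, adj in zip(["long instruction", "basic", "original"], adjusted):
    print(f"chain-of-thought vs. {label}: adjusted P = {adj:.3f}")
```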