Performance of GPT-4 with Vision on Text- and Image-based ACR Diagnostic Radiology In-Training Examination Questions

Subjects: Medicine · Medical Physics · Radiology · Artificial Intelligence · Training · Computer Vision · Computer Science · Physics
Authors
Nolan Hayden, Spencer Gilbert, Laila Poisson, Brent Griffith, Chad Klochko, Shannyn Wolfe
Source
Journal: Radiology [Radiological Society of North America]
Volume (Issue): 312 (3) · Cited by: 6
Identifier
DOI: 10.1148/radiol.240153
Abstract

Background: Recent advancements, including image processing capabilities, present potential new applications in radiology for large language models such as ChatGPT (OpenAI), a generative pretrained transformer. However, the baseline performance of ChatGPT on radiology-related tasks is understudied.

Purpose: To evaluate the performance of GPT-4 with vision (GPT-4V) on radiology in-training examination questions, including those with images, to gauge the model's baseline knowledge in radiology.

Materials and Methods: In this prospective study, conducted between September 2023 and March 2024, the September 2023 release of GPT-4V was assessed using 386 retired questions (189 image-based and 197 text-only) from the American College of Radiology Diagnostic Radiology In-Training Examinations. Nine question pairs were identified as duplicates; only the first instance of each was included in the assessment, leaving 377 unique questions. A subanalysis assessed the impact of different zero-shot prompts on performance. Statistical analysis included χ² tests of independence to ascertain whether the performance of GPT-4V varied between question types or subspecialties. The McNemar test was used to evaluate performance differences between prompts, with Benjamini-Hochberg adjustment of the P values to control the false discovery rate (FDR). A P value of less than .05 denoted statistical significance.

Results: GPT-4V correctly answered 246 (65.3%) of the 377 unique questions, with significantly higher accuracy on text-only questions (81.5%, 159 of 195) than on image-based questions (47.8%, 87 of 182) (χ² test, P < .001). Subanalysis revealed differences between prompts on text-based questions: chain-of-thought prompting outperformed long instruction by 6.1% (McNemar test, P = .02; FDR = 0.063), basic prompting by 6.8% (P = .009; FDR = 0.044), and the original prompting style by 8.9% (P = .001; FDR = 0.014). No differences were observed between prompts on image-based questions (P values, .27 to >.99).

Conclusion: While GPT-4V demonstrated a level of competence on text-based questions, it showed deficits in interpreting radiologic images.

© RSNA, 2024. See also the editorial by Deng in this issue.
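The headline comparison and the multiple-comparison correction can be checked directly from the counts the abstract reports. Below is a minimal sketch in Python using SciPy and statsmodels: the 2 × 2 table is taken straight from the Results, while the list of McNemar P values fed to the Benjamini-Hochberg step is only illustrative, since the abstract quotes just three of the pairwise prompt comparisons.

```python
from scipy.stats import chi2_contingency
from statsmodels.stats.multitest import multipletests

# Correct/incorrect counts reported in the abstract:
# text-only: 159 of 195 correct; image-based: 87 of 182 correct.
table = [
    [159, 195 - 159],  # text-only questions
    [87, 182 - 87],    # image-based questions
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.2g}")  # P < .001, as reported

# Benjamini-Hochberg FDR adjustment of pairwise McNemar P values.
# Only three P values appear in the abstract; treating them as the
# whole family here is an illustrative assumption.
pvals = [0.02, 0.009, 0.001]
reject, fdr_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for raw, adj in zip(pvals, fdr_adjusted):
    print(f"P = {raw:.3f} -> FDR-adjusted = {adj:.3f}")
```

Note that the adjusted values from this three-test illustration will not match the FDRs quoted above, because the study applied the correction over its full set of pairwise prompt comparisons, not just the three reported.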
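The abstract does not reproduce the study's prompts, but a zero-shot chain-of-thought query against a vision-capable model might look like the following sketch, assuming the OpenAI Python SDK's chat-completions interface. The model name, prompt wording, question stem, and image URL are all placeholders, not the study's materials (the retired ACR in-training questions are not public).

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical multiple-choice stem standing in for a retired exam question.
question = (
    "A chest radiograph shows a cavitary lesion in the right upper lobe. "
    "Which is the most likely diagnosis?\n"
    "A. ...\nB. ...\nC. ...\nD. ..."
)

# Zero-shot chain-of-thought instruction (paraphrased style, not the
# study's exact wording).
cot_instruction = (
    "Think through the imaging findings step by step, "
    "then answer with a single letter."
)

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # September 2023-era GPT-4V endpoint
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": cot_instruction + "\n\n" + question},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/case_image.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Scoring a run like this amounts to extracting the final answer letter from each completion and comparing it against the key; those per-question correctness tallies are what the χ² and McNemar tests above operate on.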