Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations

Medicine · Neurosurgery · MEDLINE · Medical Physics · Radiology · Political Science · Law
Authors
Rohaid Ali, Oliver Y. Tang, Ian D. Connolly, Patricia L. Zadnik Sullivan, John H. Shin, Jared S. Fridley, Wael F. Asaad, Deus Cielo, Adetokunbo A. Oyelese, Curtis E. Doberstein, Ziya L. Gokaslan, Albert E. Telfeian
Source
Journal: Neurosurgery [Oxford University Press]
Cited by: 143
Identifier
DOI: 10.1227/neu.0000000000002632
Abstract

Interest surrounding generative large language models (LLMs) has rapidly grown. Although ChatGPT (GPT-3.5), a general LLM, has shown near-passing performance on medical student board examinations, the performance of ChatGPT or its successor GPT-4 on specialized examinations and the factors affecting accuracy remain unclear. This study aims to assess the performance of ChatGPT and GPT-4 on a 500-question mock neurosurgical written board examination. The Self-Assessment Neurosurgery Examinations (SANS) American Board of Neurological Surgery Self-Assessment Examination 1 was used to evaluate ChatGPT and GPT-4. Questions were in single-best-answer, multiple-choice format. χ², Fisher exact, and univariable logistic regression tests were used to assess performance differences in relation to question characteristics. ChatGPT (GPT-3.5) and GPT-4 achieved scores of 73.4% (95% CI: 69.3%-77.2%) and 83.4% (95% CI: 79.8%-86.5%), respectively, relative to the user average of 72.8% (95% CI: 68.6%-76.6%). Both LLMs exceeded last year's passing threshold of 69%. Although scores between ChatGPT and question bank users were equivalent (P = .963), GPT-4 outperformed both (both P < .001). GPT-4 correctly answered every question that ChatGPT answered correctly, as well as 37.6% (50/133) of the questions ChatGPT answered incorrectly. Among 12 question categories, GPT-4 significantly outperformed users in each, performed comparably with ChatGPT in 3 (functional, other general, and spine), and outperformed both users and ChatGPT for tumor questions. Increased word count (odds ratio = 0.89 for answering a question correctly, per 10 additional words) and higher-order problem-solving (odds ratio = 0.40, P = .009) were associated with lower accuracy for ChatGPT, but not for GPT-4 (both P > .05). Multimodal input was not available at the time of this study; on questions with image content, ChatGPT and GPT-4 therefore answered 49.5% and 56.8% of questions correctly based on contextual clues alone. LLMs achieved passing scores on a mock 500-question neurosurgical written board examination, with GPT-4 significantly outperforming ChatGPT.
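For readers unfamiliar with the statistics the abstract cites, the sketch below illustrates how such an analysis could be run in Python with scipy and statsmodels. This is not the authors' code: the contingency table is reconstructed from the reported accuracies (73.4% and 83.4% of 500 questions), and the regression uses simulated stand-in data built to echo the reported odds ratio. An odds ratio of 0.89 per 10 additional words means each extra 10 words of question length multiplies the odds of a correct answer by 0.89, roughly an 11% drop in odds.

```python
# Minimal sketch of the two analyses the abstract describes (not the authors' code).
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# -- Chi-squared test comparing model accuracies ------------------------------
# 2x2 contingency table reconstructed from the reported scores:
# rows = model (ChatGPT, GPT-4); columns = (correct, incorrect) of 500 questions.
table = np.array([
    [367, 133],   # ChatGPT: 73.4% correct
    [417,  83],   # GPT-4:   83.4% correct
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, P = {p:.2g}")   # P < .001, as reported

# -- Univariable logistic regression on question length -----------------------
# SIMULATED stand-in data (the per-question dataset is not public here),
# generated so the true odds ratio per +10 words is ~0.89.
rng = np.random.default_rng(42)
word_count = rng.integers(20, 200, size=500).astype(float)
log_odds = 2.0 + np.log(0.89) * (word_count / 10.0)  # arbitrary intercept
correct = (rng.random(500) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

X = sm.add_constant(word_count / 10.0)   # predictor in units of 10 words
fit = sm.Logit(correct, X).fit(disp=0)
print(f"OR per +10 words: {np.exp(fit.params[1]):.2f}")   # ~0.89 in expectation
```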