Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations

Keywords: Medicine · Orthopaedic surgery · Subgroup analysis · Interpretation · Medical physics · Surgery · Internal medicine · Meta-analysis · Linguistics · Philosophy
Authors
Patrick A. Massey,Carver Montgomery,Andrew S. Zhang
Identifier
DOI:10.5435/jaaos-d-23-00396
Abstract

Introduction: Artificial intelligence (AI) programs can answer complex queries, including medical profession examination questions. The purpose of this study was to compare the performance of orthopaedic residents (ortho residents) against Chat Generative Pretrained Transformer (ChatGPT)-3.5 and GPT-4 on orthopaedic assessment examinations. A secondary objective was to perform a subgroup analysis comparing each group's performance on questions that required image interpretation versus text-only questions.

Methods: The ResStudy orthopaedic examination question bank was used as the primary source of questions. One hundred eighty questions and answer choices from nine orthopaedic subspecialties were input directly into ChatGPT-3.5 and then GPT-4. Because ChatGPT did not have consistently available image interpretation, no images were provided to either AI format. Chatbot answers were recorded as correct or incorrect, and resident performance was recorded from user data provided by ResStudy.

Results: Overall, ChatGPT-3.5, GPT-4, and ortho residents scored 29.4%, 47.2%, and 74.2%, respectively. Testing success differed among the three groups, with ortho residents scoring higher than both ChatGPT-3.5 and GPT-4 (both P < 0.001) and GPT-4 scoring higher than ChatGPT-3.5 (P = 0.002). A subgroup analysis divided questions into question stems without images and question stems with images. ChatGPT-3.5 answered text-only questions correctly more often than questions with images (37.8% vs. 22.4%; OR = 2.1, P = 0.033), as did GPT-4 (61.0% vs. 35.7%; OR = 2.8, P < 0.001). Residents scored 72.6% on text-only questions versus 75.5% on questions with images, with no significant difference (P = 0.302).

Conclusion: Orthopaedic residents answered more questions accurately than ChatGPT-3.5 and GPT-4 on orthopaedic assessment examinations. GPT-4 was superior to ChatGPT-3.5 for answering orthopaedic resident assessment examination questions. Both ChatGPT-3.5 and GPT-4 performed better on text-only questions than on questions with images. It is unlikely that GPT-4 or ChatGPT-3.5 would pass the American Board of Orthopaedic Surgery written examination.
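The odds ratios reported in the subgroup analysis follow directly from the stated accuracy percentages: the odds of a correct answer on text-only questions divided by the odds of a correct answer on image-based questions. A minimal sketch reproducing the two reported values from the published percentages (the `odds_ratio` helper is illustrative, not from the paper; the paper's own P values would additionally require the per-subgroup question counts, which are not restated here):

```python
def odds_ratio(p_a: float, p_b: float) -> float:
    """Odds ratio comparing two proportions: odds(p_a) / odds(p_b)."""
    return (p_a / (1 - p_a)) / (p_b / (1 - p_b))

# ChatGPT-3.5: 37.8% correct on text-only vs. 22.4% on image-based questions
print(round(odds_ratio(0.378, 0.224), 1))  # → 2.1

# GPT-4: 61.0% correct on text-only vs. 35.7% on image-based questions
print(round(odds_ratio(0.610, 0.357), 1))  # → 2.8
```

This confirms the reported ORs of 2.1 and 2.8 are consistent with the reported subgroup accuracies.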
