Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations

Authors
Patrick A. Massey, Carver Montgomery, Andrew S. Zhang
Identifier
DOI: 10.5435/jaaos-d-23-00396
Abstract

Introduction: Artificial intelligence (AI) programs can answer complex queries, including medical profession examination questions. The purpose of this study was to compare the performance of orthopaedic residents (ortho residents) against Chat Generative Pretrained Transformer (ChatGPT)-3.5 and GPT-4 on orthopaedic assessment examinations. A secondary objective was to perform a subgroup analysis comparing each group's performance on questions that required image interpretation versus text-only questions.

Methods: The ResStudy orthopaedic examination question bank was used as the primary source of questions. One hundred eighty questions and answer choices from nine orthopaedic subspecialties were input directly into ChatGPT-3.5 and then GPT-4. Because ChatGPT did not have consistently available image interpretation, no images were provided to either model. Chatbot answers were recorded as correct or incorrect, and resident performance was recorded from user data provided by ResStudy.

Results: Overall, ChatGPT-3.5, GPT-4, and ortho residents scored 29.4%, 47.2%, and 74.2%, respectively. Testing success differed among the three groups: ortho residents scored higher than both ChatGPT-3.5 and GPT-4 (P < 0.001 for each), and GPT-4 scored higher than ChatGPT-3.5 (P = 0.002). A subgroup analysis was performed by dividing question stems without images from question stems with images. ChatGPT-3.5 answered text-only questions more accurately than questions with images (37.8% vs. 22.4%, OR = 2.1, P = 0.033), as did GPT-4 (61.0% vs. 35.7%, OR = 2.8, P < 0.001). Residents answered 72.6% of text-only questions and 75.5% of questions with images correctly, with no significant difference (P = 0.302).

Conclusion: Orthopaedic residents answered more questions accurately than ChatGPT-3.5 and GPT-4 on orthopaedic assessment examinations. GPT-4 is superior to ChatGPT-3.5 for answering orthopaedic resident assessment examination questions. Both ChatGPT-3.5 and GPT-4 performed better on text-only questions than on questions with images. It is unlikely that GPT-4 or ChatGPT-3.5 would pass the American Board of Orthopaedic Surgery written examination.
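The reported subgroup odds ratios can be sanity-checked from the percentages alone. Below is a minimal Python sketch, assuming the 180-question bank split into 82 text-only and 98 image-based items; this split is inferred here from the reported percentages (0.378 × 82 + 0.224 × 98 ≈ 53 ≈ 29.4% of 180, matching ChatGPT-3.5's overall score) and is not stated in the abstract. SciPy's Fisher exact test is used as a stand-in, so the P values may differ slightly from the published ones, which likely come from a chi-square test.

```python
# Sanity check of the abstract's subgroup odds ratios (not the authors' code).
# The 82/98 text-only vs. image split is an assumption reconstructed from
# the reported percentages, not a figure given in the abstract.
from scipy.stats import fisher_exact

N_TEXT, N_IMAGE = 82, 98  # assumed split; 82 + 98 = 180 total questions


def subgroup_table(pct_text, pct_image):
    """Build a 2x2 table: rows = text-only / image, cols = correct / incorrect."""
    correct_text = round(pct_text * N_TEXT)
    correct_image = round(pct_image * N_IMAGE)
    return [
        [correct_text, N_TEXT - correct_text],
        [correct_image, N_IMAGE - correct_image],
    ]


for model, pct_text, pct_image in [
    ("ChatGPT-3.5", 0.378, 0.224),  # abstract reports OR = 2.1, P = 0.033
    ("GPT-4", 0.610, 0.357),        # abstract reports OR = 2.8, P < 0.001
]:
    odds_ratio, p_value = fisher_exact(subgroup_table(pct_text, pct_image))
    print(f"{model}: OR = {odds_ratio:.1f}, P = {p_value:.3f}")
```

With these assumed counts, the computed odds ratios reproduce the published values of 2.1 and 2.8, which supports the inferred 82/98 split.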
