Large Language Model Influence on Diagnostic Reasoning

Authors
Ethan Goh, Robert Gallo, Jason Hom, Eric Strong, Yingjie Weng, Hannah Kerman, Joséphine A. Cool, Zahir Kanjee, Andrew S. Parsons, Neera Ahuja, Eric Horvitz, Daniel X. Yang, Arnold Milstein, Andrew Olson, Adam Rodman, Jonathan H. Chen
Source
Journal: JAMA Network Open [American Medical Association]
Volume/Issue: 7(10): e2440969 · Cited by: 286
Identifier
DOI: 10.1001/jamanetworkopen.2024.40969
Abstract

Importance: Large language models (LLMs) have shown promise in their performance on both multiple-choice and open-ended medical reasoning examinations, but it remains unknown whether the use of such tools improves physician diagnostic reasoning.

Objective: To assess the effect of an LLM on physicians' diagnostic reasoning compared with conventional resources.

Design, Setting, and Participants: A single-blind randomized clinical trial was conducted from November 29 to December 29, 2023. Using remote video conferencing and in-person participation across multiple academic medical institutions, physicians with training in family medicine, internal medicine, or emergency medicine were recruited.

Intervention: Participants were randomized to either access the LLM in addition to conventional diagnostic resources or conventional resources only, stratified by career stage. Participants were allocated 60 minutes to review up to 6 clinical vignettes.

Main Outcomes and Measures: The primary outcome was performance on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus. Secondary outcomes included time spent per case (in seconds) and final diagnosis accuracy. All analyses followed the intention-to-treat principle. A secondary exploratory analysis evaluated the standalone performance of the LLM by comparing the primary outcomes between the LLM alone group and the conventional resources group.

Results: Fifty physicians (26 attendings, 24 residents; median years in practice, 3 [IQR, 2-8]) participated virtually as well as at 1 in-person site. The median diagnostic reasoning score per case was 76% (IQR, 66%-87%) for the LLM group and 74% (IQR, 63%-84%) for the conventional resources-only group, with an adjusted difference of 2 percentage points (95% CI, −4 to 8 percentage points; P = .60). The median time spent per case for the LLM group was 519 (IQR, 371-668) seconds, compared with 565 (IQR, 456-788) seconds for the conventional resources group, with a time difference of −82 (95% CI, −195 to 31; P = .20) seconds. The LLM alone scored 16 percentage points (95% CI, 2-30 percentage points; P = .03) higher than the conventional resources group.

Conclusions and Relevance: In this trial, the availability of an LLM to physicians as a diagnostic aid did not significantly improve clinical reasoning compared with conventional resources. The LLM alone demonstrated higher performance than both physician groups, indicating the need for technology and workforce development to realize the potential of physician-artificial intelligence collaboration in clinical practice.

Trial Registration: ClinicalTrials.gov Identifier: NCT06157944
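
To make the reported "adjusted difference" concrete: each physician scored up to 6 vignettes, so per-case scores are not independent and a comparison between arms has to account for clustering within physician. The sketch below shows one way such an analysis could look, using a random-intercept linear mixed model with synthetic data. The model form, variable names, group sizes, and simulated scores are illustrative assumptions for exposition only, not the trial's actual analysis code or data.

```python
# Hypothetical sketch: estimating an adjusted between-group difference in
# per-case diagnostic reasoning scores (0-100) with a random intercept per
# physician. All data below are simulated; the ~2-point group effect is
# seeded only to echo the magnitude reported in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

n_physicians, n_cases = 50, 6
rows = []
for pid in range(n_physicians):
    group = "llm" if pid < 25 else "conventional"          # assumed 1:1 allocation
    career = "attending" if pid % 2 == 0 else "resident"   # stratification factor (assumed)
    physician_effect = rng.normal(0, 8)                    # between-physician variability
    for case in range(n_cases):
        base = 74 + (2 if group == "llm" else 0)           # ~2-point difference, for illustration
        score = float(np.clip(base + physician_effect + rng.normal(0, 12), 0, 100))
        rows.append({"pid": pid, "group": group, "career": career, "score": score})

df = pd.DataFrame(rows)

# Random-intercept mixed model: score ~ group + career stage,
# clustering the repeated cases within each physician.
model = smf.mixedlm("score ~ group + career", df, groups=df["pid"])
result = model.fit()
print(result.summary())   # the group[T.llm] coefficient plays the role of the adjusted difference
```

Read against the trial's null primary result (2 percentage points; 95% CI, −4 to 8), the point of the random intercept is simply that treating 300 vignette scores as independent observations would overstate the precision of any between-arm difference.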