Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study (Preprint)

Keywords: Usability, Clinical decision support systems, Workflow, Medical diagnosis, Artificial intelligence, Computer science, Psychology, Medicine, Medical physics, Machine learning, Decision support systems, Pathology, Databases, Human-computer interaction

Authors
Arya Rao, Michael Pang, John Kim, Meghana Kamineni, Winston Lie, Anand Prasad, Adam Landman, Keith J. Dreyer, Marc D. Succi

Identifiers
DOI: 10.2196/preprints.48659

Abstract

BACKGROUND Large language model (LLM)-based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks, as opposed to single-ask tasks, on which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as an artificial physician, has not yet been evaluated.

OBJECTIVE This study aimed to evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.

METHODS We input all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared its accuracy on differential diagnosis, diagnostic testing, final diagnosis, and management, stratified by patient age, gender, and case acuity. Accuracy was measured as the proportion of correct responses to the questions posed within the tested clinical vignettes, as scored by human reviewers. We further conducted linear regression to assess the factors contributing to ChatGPT's performance on clinical tasks.

RESULTS ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM performed best at making a final diagnosis (accuracy 76.9%, 95% CI 67.8%-86.1%) and worst at generating an initial differential diagnosis (accuracy 60.3%, 95% CI 54.2%-66.6%). Compared with questions about general medical knowledge, ChatGPT performed worse on differential diagnosis (β=-15.8%; P<.001) and clinical management (β=-7.4%; P=.02) question types.

CONCLUSIONS ChatGPT achieves impressive accuracy in clinical decision-making, and its performance strengthens as more clinical information becomes available. In particular, ChatGPT is most accurate at final diagnosis, as compared to initial differential diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT's training data set.
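The accuracy metric described in the Methods (proportion of correct responses, reported with a 95% CI) can be sketched as a simple normal-approximation (Wald) interval on a proportion. This is an illustrative reconstruction, not the authors' published code; the counts below are hypothetical, and the study may have used a different interval method.

```python
import math

def accuracy_ci(correct: int, total: int, z: float = 1.96):
    """Proportion of correct responses with a Wald 95% CI.

    Illustrative sketch only -- the paper reports accuracy as the
    proportion of correct vignette responses with a 95% CI, but the
    exact interval method is an assumption here.
    """
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts chosen for illustration only
p, lo, hi = accuracy_ci(430, 600)
print(f"accuracy {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The same function applies per question type (differential diagnosis, diagnostic testing, final diagnosis, management), which is how the subgroup accuracies in the Results could be tabulated.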
