Usability
Clinical decision support systems
Workflow
Medical diagnosis
Artificial intelligence
Computer science
Psychology
Medicine
Medical physics
Machine learning
Decision support systems
Pathology
Databases
Human–computer interaction
Authors
Arya Rao,Michael Pang,John Kim,Meghana Kamineni,Winston Lie,Anand Prasad,Adam Landman,Keith J. Dreyer,Marc D. Succi
Identifier
DOI:10.2196/preprints.48659
Abstract
BACKGROUND: Large language model (LLM)–based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks, as opposed to single-ask tasks, for which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as artificial physicians, has not yet been evaluated.
OBJECTIVE: This study aimed to evaluate ChatGPT’s capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.
METHODS: We inputted all 36 published clinical vignettes from the <i>Merck Sharp & Dohme (MSD) Clinical Manual</i> into ChatGPT and compared its accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity. Accuracy was measured as the proportion of correct responses to the questions posed within the clinical vignettes tested, as calculated by human scorers. We further conducted linear regression to assess the factors contributing to ChatGPT’s performance on clinical tasks.
RESULTS: ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis, with an accuracy of 76.9% (95% CI 67.8%-86.1%), and the lowest performance in generating an initial differential diagnosis, with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=–15.8%; <i>P</i><.001) and clinical management (β=–7.4%; <i>P</i>=.02) question types.
CONCLUSIONS: ChatGPT achieves impressive accuracy in clinical decision-making, with increasing strength as it gains more clinical information at its disposal. In particular, ChatGPT demonstrates the greatest accuracy in tasks of final diagnosis as compared to initial diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT’s training data set.
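The accuracy figures above are proportions of correct responses reported with 95% confidence intervals. The abstract does not state which interval method the authors used, so the sketch below uses the common normal-approximation (Wald) interval purely for illustration; the counts passed in are hypothetical, not taken from the paper.

```python
import math

def accuracy_with_ci(correct: int, total: int, z: float = 1.96):
    """Proportion correct with a normal-approximation (Wald) 95% CI.

    Illustrative only: the paper reports proportions with 95% CIs,
    but its exact interval method is not stated in the abstract.
    """
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
    # Clamp to [0, 1] since a Wald interval can overshoot the bounds
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts chosen to yield an accuracy near the reported 71.7%
p, lo, hi = accuracy_with_ci(258, 360)
print(f"accuracy {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

Note that the interval width shrinks with the number of scored questions, which is why the overall accuracy (pooled across all vignettes) carries a much tighter CI than the per-task accuracies.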