Lumbar disc herniation
Medicine
Disc herniation
Lumbar spine
Physical therapy
Surgery
Authors
Ankur Kayastha, Kirthika Lakshmanan, Michael J. Valentine, Anh Nguyen, K. Dholakia, Daniel Wang
Identifier
DOI:10.1016/j.xnsj.2024.100333
Abstract
Background
ChatGPT is an advanced language AI able to generate responses to clinical questions regarding lumbar disc herniation with radiculopathy. Artificial intelligence (AI) tools are increasingly being considered to assist clinicians in decision-making. This study compared ChatGPT-3.5 and ChatGPT-4.0 responses to established NASS clinical guidelines and evaluated concordance.

Methods
ChatGPT-3.5 and ChatGPT-4.0 were prompted with fifteen questions from the 2012 NASS Clinical Guidelines for the Diagnosis and Treatment of Lumbar Disc Herniation with Radiculopathy. Clinical questions, organized into categories, were entered directly as unmodified queries into ChatGPT. Language output was assessed by two independent authors on September 26, 2023 based on operationally defined parameters of accuracy, over-conclusiveness, supplementary information, and incompleteness. ChatGPT-3.5 and ChatGPT-4.0 performance was compared via chi-square analyses.

Results
Among the fifteen responses produced by ChatGPT-3.5, seven (47%) were accurate, seven (47%) were over-conclusive, fifteen (100%) were supplementary, and six (40%) were incomplete. For ChatGPT-4.0, ten (67%) were accurate, five (33%) were over-conclusive, ten (67%) were supplementary, and six (40%) were incomplete. There was a statistically significant difference in supplementary information (100% vs. 67%; p=0.014) between ChatGPT-3.5 and ChatGPT-4.0. Accuracy (47% vs. 67%; p=0.269), over-conclusiveness (47% vs. 33%; p=0.456), and incompleteness (40% vs. 40%; p=1.000) did not differ significantly between ChatGPT-3.5 and ChatGPT-4.0. ChatGPT-3.5 and ChatGPT-4.0 both yielded 100% accuracy for the definition and the history and physical examination categories. Diagnostic testing yielded 0% accuracy for ChatGPT-3.5 and 100% accuracy for ChatGPT-4.0. Non-surgical interventions had 50% accuracy for ChatGPT-3.5 and 63% accuracy for ChatGPT-4.0. Surgical interventions resulted in 0% accuracy for ChatGPT-3.5 and 33% accuracy for ChatGPT-4.0.

Conclusions
ChatGPT-4.0 provided less supplementary information and higher overall accuracy across question categories than ChatGPT-3.5. ChatGPT showed reasonable concordance with NASS guidelines, but clinicians should be cautious about using ChatGPT in its current state, as it fails to safeguard against misinformation.
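The abstract does not state which form of the chi-square test was used, but a minimal sketch assuming a Pearson chi-square on 2×2 contingency tables without Yates continuity correction (the function name `chi2_2x2` is illustrative, not from the study) reproduces the reported p-values from the counts given (e.g., supplementary information: 15/15 vs. 10/15 gives p ≈ 0.014):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (df = 1, no Yates
    correction) for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Shortcut form of the Pearson statistic for a 2x2 table.
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For df = 1, the chi-square survival function is erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Rows: GPT-3.5 (yes, no), GPT-4.0 (yes, no), from the reported counts.
print(chi2_2x2(15, 0, 10, 5))   # supplementary:     p ~ 0.014
print(chi2_2x2(7, 8, 10, 5))    # accurate:          p ~ 0.269
print(chi2_2x2(7, 8, 5, 10))    # over-conclusive:   p ~ 0.456
print(chi2_2x2(6, 9, 6, 9))     # incomplete:        p = 1.000
```

That the uncorrected test matches all four reported p-values suggests no continuity correction was applied; with Yates correction the supplementary comparison would yield a larger p-value.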