Medicine
Likert scale
Carpal tunnel syndrome
Orthopedic surgery
Tenosynovitis
Medical diagnosis
Hand surgery
Physical therapy
Descriptive statistics
Medical physics
Surgery
Medical education
Pathology
Statistics
Mathematics
Authors
Olivia Jagiella-Lodise, Nina Suh, Nicole A. Zelenski
Source
Journal: Hand
[SAGE]
Date: 2024-04-23
Citations: 3
Identifier
DOI:10.1177/15589447241247246
Abstract
Background: In recent years, ChatGPT has become a popular source of information online. Physicians need to be aware of the resources their patients are using to self-inform about their conditions. This study investigates physician-graded accuracy and completeness of ChatGPT's responses to various questions patients are likely to ask the artificial intelligence (AI) system concerning common upper limb orthopedic conditions.

Methods: ChatGPT 3.5 was interrogated concerning 5 common orthopedic hand conditions: carpal tunnel syndrome, Dupuytren contracture, De Quervain tenosynovitis, trigger finger, and carpometacarpal arthritis. Questions evaluated each condition's symptoms, pathology, management, surgical indications, recovery time, insurance coverage, and workers' compensation possibility. Each topic had 12 to 15 questions and was established as its own ChatGPT conversation. All questions regarding the same diagnosis were presented to the AI, and its answers were recorded. Each answer was then graded for both accuracy (Likert scale of 1-6) and completeness (Likert scale of 1-3) by 10 fellowship-trained hand surgeons. Descriptive statistics were performed.

Results: Overall, the mean accuracy score for ChatGPT's answers to common orthopedic hand diagnoses was 4.83 out of 6 ± 0.95. The mean completeness of answers was 2 out of 3 ± 0.59.

Conclusions: Easily accessible online AI such as ChatGPT is becoming more advanced and thus more reliable in its ability to answer common medical questions. Physicians can anticipate such online resources being mostly correct, though incomplete. Patients should beware of relying on such resources in isolation.