Authors
Christopha J Knee, Ryan Campbell, David J. Graham, Cameron Handford, Michael Symes, Brahman Sivakumar
Abstract
Background: The optimal management of distal radius fractures remains a challenge for orthopaedic surgeons. The emergence of Artificial Intelligence (AI) and Large Language Models (LLMs), especially ChatGPT, affords significant potential for improving healthcare and research. This study aims to assess the accuracy and consistency of ChatGPT's knowledge of distal radius fracture management, with a focus on its capability to provide information for patients and assist in the decision-making processes of orthopaedic clinicians.

Methods: We presented ChatGPT with seven questions on distal radius fracture management over two sessions, resulting in 14 responses. These questions covered a range of topics, including patient inquiries and orthopaedic clinical decision-making. We requested references for each response and involved two orthopaedic registrars and two senior orthopaedic surgeons to evaluate response accuracy and consistency.

Results: All 14 responses contained a mix of both correct and incorrect information. Among the 47 cited references, 13% were accurate, 28% appeared to be fabricated, 57% were incorrect, and 2% were correct but deemed inappropriate. Consistency was observed in 71% of the responses.

Conclusion: ChatGPT demonstrates significant limitations in accuracy and consistency when providing information on distal radius fractures. In its current format, it offers limited utility for patient education and clinical decision-making.