Readability
Layperson
Medicine
Likert scale
Comprehension
MEDLINE
Usefulness
Medical physics
Physical therapy
Computer science
Psychology
Social psychology
Developmental psychology
Political science
Law
Programming language
Authors
Tejas Subramanian,Kasra Araghi,Troy B. Amen,Austin C. Kaidi,Branden R. Sosa,Pratyush Shahi,Sheeraz A. Qureshi,Sravisht Iyer
Source
Journal: Clinical Spine Surgery
[Ovid Technologies (Wolters Kluwer)]
Date: 2024-03-21
Volume/Issue: 37 (6): E278-E281
Citations: 1
Identifier
DOI:10.1097/bsd.0000000000001600
Abstract
Study Design: Review of Chat Generative Pre-trained Transformer (ChatGPT) outputs to select patient-focused questions. Objective: We aimed to examine the quality of ChatGPT responses to cervical spine questions. Background: Artificial intelligence and its utilization to improve patient experience across medicine is seeing remarkable growth. One such usage is patient education. For the first time on a large scale, patients can ask targeted questions and receive similarly targeted answers. Although patients may use these resources to assist in decision-making, there still exists little data regarding their accuracy, especially within orthopedic surgery and more specifically spine surgery. Methods: We compiled 9 questions that cervical spine surgeons frequently receive in the clinic to test the ability of ChatGPT version 3.5 to answer a nuanced topic. Responses were reviewed by 2 independent reviewers on a Likert scale for the accuracy of information presented (0–5 points), appropriateness in giving a specific answer (0–3 points), and readability for a layperson (0–2 points). Readability was assessed through the Flesch-Kincaid grade level analysis for the original prompt and for a second prompt asking for rephrasing at the sixth-grade reading level. Results: On average, ChatGPT's responses scored 7.1/10. Accuracy was rated on average 4.1/5, appropriateness 1.8/3, and readability 1.2/2. Readability was determined to be at the 13.5 grade level originally and at the 11.2 grade level after prompting. Conclusions: ChatGPT has the capacity to be a powerful means for patients to gain important and specific information regarding their pathologies and surgical options. These responses are limited in their accuracy, and we additionally noted that readability is not optimal for the average patient.
Despite these limitations in ChatGPT's capability to answer these nuanced questions, the technology is impressive, and surgeons should be aware that patients will likely increasingly rely on it.
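The Flesch-Kincaid grade level used in the study's readability analysis follows a standard published formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. The sketch below illustrates the computation; the paper does not describe its tooling, and the vowel-group syllable counter here is a crude assumption for illustration only (dedicated readability software counts syllables more accurately).

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Estimate the Flesch-Kincaid grade level of a passage.

    Formula: 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    # Count sentence-ending punctuation runs as sentence boundaries.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def count_syllables(word: str) -> int:
        # Crude heuristic (an assumption, not the study's method):
        # each contiguous vowel group counts as one syllable, minimum 1.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
```

Under this formula, longer sentences and polysyllabic clinical vocabulary both push the grade level up, which is why the study's unprompted ChatGPT responses landed near grade 13.5, well above the sixth-grade level commonly recommended for patient education materials.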