Readability
Medicine
Anterior cervical discectomy and fusion
Grade
Reading (process)
Quality (concept)
Health literacy
Medical physics
Surgery
Internal medicine
Cervical spine
Health care
Psychology
Computer science
Philosophy
Mathematics education
Epistemology
Political science
Law
Economics
Programming language
Economic growth
Authors
Paul G. Mastrokostas,Leonidas E. Mastrokostas,Ahmed K. Emara,Ian J. Wellington,Elizabeth E. Ginalis,John K. Houten,Amrit S. Khalsa,Ahmed Saleh,Afshin E. Razi,Mitchell K. Ng
Identifiers
DOI:10.1177/21925682241241241
Abstract
Study Design: Comparative study.
Objectives: This study aims to compare Google and GPT-4 in terms of (1) question types, (2) response readability, (3) source quality, and (4) numerical response accuracy for the top 10 most frequently asked questions (FAQs) about anterior cervical discectomy and fusion (ACDF).
Methods: "Anterior cervical discectomy and fusion" was searched on Google and GPT-4 on December 18, 2023. The top 10 FAQs were classified according to the Rothwell system. Source quality was evaluated using JAMA benchmark criteria, and readability was assessed using Flesch Reading Ease and Flesch-Kincaid grade level. Differences in JAMA scores, Flesch-Kincaid grade level, Flesch Reading Ease, and word count between platforms were analyzed using Student's t-tests. Statistical significance was set at the .05 level.
Results: Frequently asked questions from Google were varied, while GPT-4 focused on technical details and indications/management. GPT-4 showed a higher Flesch-Kincaid grade level (12.96 vs 9.28, P = .003), a lower Flesch Reading Ease score (37.07 vs 54.85, P = .005), and higher JAMA scores for source quality (3.333 vs 1.800, P = .016). Numerically, 6 out of 10 responses varied between platforms, with GPT-4 providing broader recovery timelines for ACDF.
Conclusions: This study demonstrates GPT-4's ability to elevate patient education by providing high-quality, diverse information tailored to those with advanced literacy levels. As AI technology evolves, refining these tools for accuracy and user-friendliness remains crucial, catering to patients' varying literacy levels and information needs in spine surgery.