Evaluation of the Current Status of Artificial Intelligence for Endourology Patient Education: A Blind Comparison of ChatGPT and Google Bard against Traditional Information Resources

Keywords: readability, medicine, clarity, reading (process), Likert scale, concordance, medical education, internal medicine, psychology, computer science, developmental psychology, biochemistry, chemistry, political science, law, programming language
Authors
Christopher Connors, Kavita Gupta, Johnathan A. Khusid, Raymond Khargi, Alan Yaghoubian, Micah Levy, Blair Gallante, William Atallah, Mantu Gupta
Source
Journal: Journal of Endourology [Mary Ann Liebert]
Citations: 2
Identifier
DOI: 10.1089/end.2023.0696
Abstract

Introduction: Artificial intelligence (AI) platforms such as ChatGPT and Bard are increasingly used to answer patient healthcare questions. We present the first study to blindly evaluate AI-generated responses to common endourology patient questions against official patient education materials.

Methods: Thirty-two questions and answers spanning kidney stones, ureteral stents, benign prostatic hyperplasia (BPH), and upper tract urothelial carcinoma (UTUC) were extracted from official Urology Care Foundation (UCF) patient education documents. The same questions were input into ChatGPT 4.0 and Bard, with responses limited to within 10% of the word count of the corresponding UCF response to ensure a fair comparison. Six endourologists blindly rated the responses from each platform on Likert scales for accuracy, clarity, comprehensiveness, and patient utility, and identified which response they believed was not AI-generated. Lastly, Flesch-Kincaid Reading Grade Level formulas were used to assess the readability of each platform's responses. Ratings were compared using ANOVA and chi-square tests.

Results: ChatGPT responses were rated highest across all categories (accuracy, comprehensiveness, clarity, and patient utility), while UCF answers were consistently scored lowest (all p<0.01). Sub-analysis revealed that this trend held across question categories (e.g., kidney stones, BPH). However, AI-generated responses were more likely to be classified at an advanced reading level, whereas UCF responses showed better readability (college or higher reading level: ChatGPT = 100%, Bard = 66%, UCF = 19%; p<0.001). When asked to identify which answer was not AI-generated, 54.2% of responses indicated ChatGPT, 26.6% indicated Bard, and only 19.3% correctly identified the UCF response.

Conclusions: In a blind evaluation, AI-generated responses from ChatGPT and Bard surpassed the quality of official patient education materials in endourology, suggesting that current AI platforms are already a reliable resource for basic urologic care information. AI-generated responses do, however, tend to require a higher reading level, which may limit their accessibility to a broader audience.
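For context, the readability metric cited in the Methods is the standard Flesch-Kincaid Grade Level formula: grade = 0.39 × (words/sentence) + 11.8 × (syllables/word) − 15.59. The Python sketch below is a minimal illustration of how such a score can be computed; the abstract does not specify the study's actual readability tooling, so the function names and the naive regex-based syllable heuristic here are assumptions for demonstration only.

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic (assumption): count runs of consecutive vowels and
    # drop a likely-silent trailing "e". Production readability tools use
    # pronunciation dictionaries or more careful rules.
    word = word.lower()
    vowel_groups = re.findall(r"[aeiouy]+", word)
    count = len(vowel_groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_kincaid_grade(text: str) -> float:
    # Standard Flesch-Kincaid Grade Level formula:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (
        0.39 * (len(words) / len(sentences))
        + 11.8 * (syllables / len(words))
        - 15.59
    )


if __name__ == "__main__":
    sample = (
        "Kidney stones form when minerals crystallize in the urine. "
        "Drinking more water helps."
    )
    print(f"Estimated grade level: {flesch_kincaid_grade(sample):.1f}")
```

A score of roughly 13 or higher corresponds to the "college or higher" reading level reported in the Results.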