ChatGPT4 Outperforms Endoscopists for Determination of Postcolonoscopy Rescreening and Surveillance Recommendations

Keywords: Medicine, Colonoscopy, Concordance, McNemar's test, Guidelines, Clinical practice, Confidence interval, Colorectal cancer screening, General surgery, Family medicine, Medical physics, Gastroenterology, Internal medicine, Pathology, Colorectal cancer, Cancer, Statistics, Mathematics
Authors
Patrick Chang, Maziar M. Amini, Rio O. Davis, Denis Nguyen, Jennifer L. Dodge, Helen Lee, Sarah Sheibani, Jennifer Phan, James Buxbaum, Ara Sahakian
Source
Journal: Clinical Gastroenterology and Hepatology [Elsevier]
Volume/Issue: 22 (9): 1917-1925.e17; Cited by: 5
Identifier
DOI: 10.1016/j.cgh.2024.04.022
Abstract

Background: Large language models (LLMs), including ChatGPT4, improve access to artificial intelligence, but their impact on the clinical practice of gastroenterology is undefined. In this study, we aim to compare the accuracy, concordance, and reliability of ChatGPT4 colonoscopy recommendations for colorectal cancer rescreening and surveillance with contemporary guidelines and real-world gastroenterology practice.

Methods: The history of present illness, colonoscopy data, and pathology reports from patients undergoing procedures at two large academic centers were entered into ChatGPT4, which was queried for the next recommended colonoscopy follow-up interval. Using McNemar's test and inter-rater reliability, we compared the recommendations made by ChatGPT4 with the actual surveillance interval provided in the endoscopist's procedure report (gastroenterology practice) and with the appropriate USMSTF guidance. The latter was generated for each case by an expert panel using the clinical information and guideline documents as reference.

Results: Text input of de-identified data into ChatGPT4 from 505 consecutive patients undergoing colonoscopy between January 1 and April 30, 2023 elicited a successful follow-up recommendation in 99.2% of queries. ChatGPT4 recommendations were in closer agreement with the USMSTF Panel (85.7%) than gastroenterology practice recommendations were (75.4%) (P < .001). Of the 14.3% of recommendations discordant between ChatGPT4 and the USMSTF Panel, 26 (5.1%) were for later screening and 44 (8.7%) for earlier screening. Inter-rater reliability was good for ChatGPT4 vs. the USMSTF Panel (Fleiss κ: 0.786; 95% CI: 0.734-0.838; P < .001).

Conclusions: Initial real-world results suggest that ChatGPT4 can accurately define routine colonoscopy screening intervals based on verbatim input of clinical data. LLMs have potential for clinical applications, but further training is needed for broad use.
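The Methods describe entering free-text clinical data (history of present illness, colonoscopy findings, pathology) into ChatGPT4 and asking for the next recommended follow-up interval. The paper does not specify the exact prompt wording or interface used, so the sketch below is only an illustrative assumption of how such a query could be issued programmatically; the prompt text, model identifier, and the `recommend_interval` helper are hypothetical, not the authors' protocol.

```python
# Hypothetical sketch: query an OpenAI chat model for a post-colonoscopy
# follow-up interval from de-identified clinical text. Prompt wording and
# model name are illustrative assumptions, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recommend_interval(case_text: str) -> str:
    """Return the model's free-text recommendation for one de-identified case."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting with post-colonoscopy follow-up. "
                    "Given the history of present illness, colonoscopy findings, "
                    "and pathology report, state the next recommended "
                    "colonoscopy interval in years."
                ),
            },
            {"role": "user", "content": case_text},
        ],
    )
    return response.choices[0].message.content

# Example call with a fabricated, de-identified vignette (illustrative only).
print(recommend_interval(
    "58-year-old average-risk patient; high-quality exam to the cecum; "
    "two 4 mm tubular adenomas removed from the sigmoid colon."
))
```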
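The statistical comparison rests on two standard tools named in the Methods: McNemar's test for the paired difference in agreement with the USMSTF Panel (ChatGPT4 vs. gastroenterology practice on the same cases), and Fleiss' κ for inter-rater reliability. A minimal sketch of how these could be computed with statsmodels is shown below; the arrays are simulated stand-ins chosen only to mimic the reported agreement rates, not the study data.

```python
# Minimal sketch, assuming simulated data: paired comparison of agreement with
# the USMSTF panel via McNemar's test, plus Fleiss' kappa for ChatGPT4 vs. panel.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Illustrative inputs: per-case recommended intervals (years) from each source.
rng = np.random.default_rng(0)
intervals = np.array([1, 3, 5, 7, 10])
panel = rng.choice(intervals, size=500)
chatgpt = np.where(rng.random(500) < 0.86, panel, rng.choice(intervals, size=500))
practice = np.where(rng.random(500) < 0.75, panel, rng.choice(intervals, size=500))

# Paired agreement indicators relative to the panel reference.
gpt_agrees = chatgpt == panel
practice_agrees = practice == panel

# 2x2 table of (ChatGPT agrees?, practice agrees?) counts; McNemar's test asks
# whether the discordant cells differ, i.e., whether one source agrees with the
# panel more often than the other on the same cases.
table = np.array([
    [np.sum(gpt_agrees & practice_agrees), np.sum(gpt_agrees & ~practice_agrees)],
    [np.sum(~gpt_agrees & practice_agrees), np.sum(~gpt_agrees & ~practice_agrees)],
])
result = mcnemar(table, exact=False, correction=True)
print("McNemar statistic:", result.statistic, "p-value:", result.pvalue)

# Fleiss' kappa treating ChatGPT4 and the panel as two raters assigning each
# case to an interval category.
ratings = np.column_stack([chatgpt, panel])   # subjects x raters
counts, _ = aggregate_raters(ratings)         # subjects x categories
print("Fleiss kappa:", fleiss_kappa(counts, method="fleiss"))
```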