ChatGPT4 Outperforms Endoscopists for Determination of Postcolonoscopy Rescreening and Surveillance Recommendations

Keywords: medicine, colonoscopy, concordance, McNemar's test, guidelines, clinical practice, confidence interval, colorectal cancer screening, general surgery, family medicine, medical physics, gastroenterology, internal medicine, pathology, colorectal cancer, cancer, statistics, mathematics
Authors
Patrick Chang, Maziar M. Amini, Rio O. Davis, Denis Nguyen, Jennifer L. Dodge, Helen Lee, Sarah Sheibani, Jennifer Phan, James Buxbaum, Ara Sahakian
Source
Journal: Clinical Gastroenterology and Hepatology [Elsevier BV]
Volume/Issue: 22 (9): 1917-1925.e17; Cited by: 7
Identifier
DOI: 10.1016/j.cgh.2024.04.022
Abstract

Background: Large language models (LLMs), including ChatGPT4, improve access to artificial intelligence, but their impact on the clinical practice of gastroenterology is undefined. In this study, we aimed to compare the accuracy, concordance, and reliability of ChatGPT4 colonoscopy recommendations for colorectal cancer rescreening and surveillance with contemporary guidelines and real-world gastroenterology practice.

Methods: History of present illness, colonoscopy data, and pathology reports from patients undergoing procedures at two large academic centers were entered into ChatGPT4, which was queried for the next recommended colonoscopy follow-up interval. Using McNemar's test and inter-rater reliability, we compared the recommendations made by ChatGPT4 with the actual surveillance interval provided in the endoscopist's procedure report (gastroenterology practice) and with the appropriate USMSTF guidance. The latter was generated for each case by an expert panel using the clinical information and guideline documents as reference.

Results: Text input of de-identified data into ChatGPT4 from 505 consecutive patients undergoing colonoscopy between January 1 and April 30, 2023 elicited a successful follow-up recommendation in 99.2% of the queries. ChatGPT4 recommendations were in closer agreement with the USMSTF Panel (85.7%) than gastroenterology practice recommendations were with the USMSTF Panel (75.4%) (P < .001). Of the 14.3% of recommendations discordant between ChatGPT4 and the USMSTF Panel, recommendations were for later screening in 26 (5.1%) and earlier screening in 44 (8.7%) cases. Inter-rater reliability was good for ChatGPT4 vs the USMSTF Panel (Fleiss κ: 0.786; 95% CI: 0.734-0.838; P < .001).

Conclusions: Initial real-world results suggest that ChatGPT4 can accurately define routine colonoscopy screening intervals based on verbatim input of clinical data. LLMs have potential for clinical applications, but further training is needed for broad use.
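The comparison above rests on two standard statistics: McNemar's test for paired discordance between recommendation sources, and Fleiss' κ for inter-rater reliability. A minimal, self-contained sketch of both (using illustrative counts, not the study's raw data) could look like:

```python
# Sketch of the two statistics named in the Methods: McNemar's chi-square
# (with continuity correction) and Fleiss' kappa. All counts below are
# hypothetical examples, not the study's dataset.

def mcnemar_statistic(b: int, c: int) -> float:
    """Continuity-corrected chi-square statistic from the two discordant
    cell counts b and c of a paired 2x2 table."""
    return (abs(b - c) - 1) ** 2 / (b + c)

def fleiss_kappa(counts: list[list[int]]) -> float:
    """Fleiss' kappa for a subjects-by-categories matrix of rating counts;
    every subject must be rated by the same number of raters."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Mean per-subject observed agreement
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # Chance agreement from marginal category proportions
    totals = [sum(col) for col in zip(*counts)]
    grand = n_subjects * n_raters
    p_e = sum((t / grand) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical paired data: 26 pairs discordant one way, 44 the other
stat = mcnemar_statistic(26, 44)

# Hypothetical ratings: 4 cases, 2 raters, 2 interval categories
kappa = fleiss_kappa([[2, 0], [2, 0], [0, 2], [1, 1]])
```

In practice one would compare the chi-square statistic against the χ² distribution with 1 degree of freedom for a P value; libraries such as statsmodels provide both tests ready-made.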