ChatGPT4 outperforms endoscopists for determination of post-colonoscopy re-screening and surveillance recommendations

Keywords: Medicine, Colonoscopy, General surgery, Medical physics, Internal medicine, Colorectal cancer, Cancer
Authors
Patrick Chang, Maziar M. Amini, Ruth Davis, Denis Nguyen, Jennifer L. Dodge, Helen Lee, Sarah Sheibani, Jennifer Phan, James Buxbaum, Ara Sahakian
Source
Journal: Clinical Gastroenterology and Hepatology [Elsevier]
Identifier
DOI: 10.1016/j.cgh.2024.04.022
Abstract

Background
Large language models (LLMs), including ChatGPT4, improve access to artificial intelligence, but their impact on the clinical practice of gastroenterology is undefined. In this study, we aim to compare the accuracy, concordance, and reliability of ChatGPT4 colonoscopy recommendations for colorectal cancer re-screening and surveillance with contemporary guidelines and real-world gastroenterology practice.

Methods
History of present illness, colonoscopy data, and pathology reports from patients undergoing procedures at two large academic centers were entered into ChatGPT4, which was queried for the next recommended colonoscopy follow-up interval. Using McNemar's test and inter-rater reliability, we compared the recommendations made by ChatGPT4 with the actual surveillance interval provided in the endoscopist's procedure report (gastroenterology practice) and with the appropriate USMSTF guidance. The latter was generated for each case by an expert panel using the clinical information and guideline documents as reference.

Results
Text input of de-identified data into ChatGPT4 from 505 consecutive patients undergoing colonoscopy between January 1st and April 30th, 2023 elicited a successful follow-up recommendation in 99.2% of the queries. ChatGPT4 recommendations were in closer agreement with the USMSTF Panel (85.7%) than gastroenterology practice recommendations were with the USMSTF Panel (75.4%) (P < .001). Of the 14.3% of recommendations discordant between ChatGPT4 and the USMSTF Panel, recommendations were for later screening in 26 (5.1%) and earlier screening in 44 (8.7%) cases. Inter-rater reliability was good for ChatGPT4 vs. the USMSTF Panel (Fleiss κ: 0.786, 95% CI: 0.734-0.838, P < .001).

Conclusions
Initial real-world results suggest that ChatGPT4 can accurately define routine colonoscopy screening intervals based on verbatim input of clinical data. LLMs have potential for clinical applications, but further training is needed for broad use.
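The statistical comparison described in the Methods (McNemar's test on paired agreement with the USMSTF panel, plus Fleiss' kappa for inter-rater reliability) can be illustrated with a short Python example. This is a minimal sketch, not the authors' analysis code: the arrays and their values are hypothetical stand-ins for the 505-case dataset, and only standard statsmodels functions are used.

```python
# Minimal sketch (not the authors' code) of the paired-agreement analysis
# outlined in the abstract: McNemar's test on agreement with the USMSTF
# panel reference, and Fleiss' kappa between ChatGPT4 and the panel.
# All example values below are hypothetical follow-up intervals in years.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

panel = np.array([10, 3, 5, 10, 7, 3])   # USMSTF expert panel reference
gpt4  = np.array([10, 3, 5, 10, 5, 3])   # ChatGPT4 recommendations
endo  = np.array([10, 5, 5,  7, 7, 3])   # endoscopist report (practice)

# Paired 2x2 table: does each source agree with the panel on the same case?
gpt_ok, endo_ok = gpt4 == panel, endo == panel
table = [[np.sum(gpt_ok & endo_ok),  np.sum(gpt_ok & ~endo_ok)],
         [np.sum(~gpt_ok & endo_ok), np.sum(~gpt_ok & ~endo_ok)]]
print(mcnemar(table, exact=True))        # test driven by the discordant pairs

# Inter-rater reliability over the categorical interval recommendations.
counts, _ = aggregate_raters(np.column_stack([gpt4, panel]))
print(fleiss_kappa(counts))
```

Note that McNemar's test uses only the discordant pairs (cases where exactly one of ChatGPT4 and the endoscopist matched the panel), which is why both sets of recommendations must be scored against a common per-case reference.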