ChatGPT4 Outperforms Endoscopists for Determination of Postcolonoscopy Rescreening and Surveillance Recommendations

Keywords: Medicine, Colonoscopy, Concordance, McNemar's test, Guidelines, Clinical practice, Confidence interval, Colorectal cancer screening, General surgery, Family medicine, Medical physics, Gastroenterology, Internal medicine, Pathology, Colorectal cancer, Cancer, Statistics, Mathematics
Authors
Patrick Chang, Maziar M. Amini, Rio O. Davis, Denis Nguyen, Jennifer L. Dodge, Helen Lee, Sarah Sheibani, Jennifer Phan, James Buxbaum, Ara Sahakian
Source
Journal: Clinical Gastroenterology and Hepatology [Elsevier]
Volume/Issue: 22 (9): 1917-1925.e17; Cited by: 2
Identifier
DOI: 10.1016/j.cgh.2024.04.022
Abstract

Background: Large language models (LLMs), including ChatGPT4, improve access to artificial intelligence, but their impact on the clinical practice of gastroenterology is undefined. In this study, we aimed to compare the accuracy, concordance, and reliability of ChatGPT4 colonoscopy recommendations for colorectal cancer rescreening and surveillance against contemporary guidelines and real-world gastroenterology practice.

Methods: History of present illness, colonoscopy data, and pathology reports from patients undergoing procedures at two large academic centers were entered into ChatGPT4, which was queried for the next recommended colonoscopy follow-up interval. Using McNemar's test and inter-rater reliability, we compared the recommendations made by ChatGPT4 with the actual surveillance interval provided in the endoscopist's procedure report (gastroenterology practice) and with the appropriate USMSTF guidance. The latter was generated for each case by an expert panel using the clinical information and guideline documents as reference.

Results: Text input of de-identified data into ChatGPT4 from 505 consecutive patients undergoing colonoscopy between January 1 and April 30, 2023 elicited a successful follow-up recommendation in 99.2% of queries. ChatGPT4 recommendations were in closer agreement with the USMSTF Panel (85.7%) than gastroenterology practice recommendations were with the USMSTF Panel (75.4%) (P < .001). Of the 14.3% of recommendations discordant between ChatGPT4 and the USMSTF Panel, ChatGPT4 recommended later screening in 26 cases (5.1%) and earlier screening in 44 cases (8.7%). Inter-rater reliability was good for ChatGPT4 vs. the USMSTF Panel (Fleiss κ: 0.786; 95% CI: 0.734-0.838; P < .001).

Conclusions: Initial real-world results suggest that ChatGPT4 can accurately define routine colonoscopy screening intervals based on verbatim input of clinical data. LLMs have potential for clinical applications, but further training is needed for broad use.
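The study entered verbatim clinical text (history of present illness, colonoscopy findings, pathology) into ChatGPT4 and asked for the next follow-up interval. As a rough illustration only, the sketch below shows how a comparable query could be issued programmatically through the OpenAI chat completions API; the prompt wording, model identifier, and example case are assumptions and do not reproduce the authors' interface, prompts, or data.

```python
"""Minimal sketch (an assumption, not the study's pipeline) of querying a
GPT-4-class model for a post-colonoscopy surveillance interval."""
from openai import OpenAI  # requires openai>=1.0 and OPENAI_API_KEY in the environment

client = OpenAI()

# Hypothetical de-identified case text, for illustration only.
case_text = (
    "History: 58-year-old average-risk patient, first screening colonoscopy.\n"
    "Colonoscopy: two 4 mm sigmoid polyps removed; adequate prep; cecum reached.\n"
    "Pathology: both polyps tubular adenomas without high-grade dysplasia."
)

response = client.chat.completions.create(
    model="gpt-4",   # placeholder model identifier
    temperature=0,   # deterministic output for reproducibility
    messages=[
        {"role": "system",
         "content": "You are a gastroenterologist. Recommend the next colonoscopy "
                    "interval per current USMSTF post-polypectomy guidance."},
        {"role": "user", "content": case_text},
    ],
)
print(response.choices[0].message.content)
```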
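The comparison statistics named in the abstract, McNemar's test on paired agreement with the reference standard and Fleiss' kappa for inter-rater reliability, can be outlined with statsmodels. The snippet below is a minimal sketch on invented toy data; the variable names and intervals are illustrative, not the study dataset.

```python
"""Illustrative sketch of the agreement statistics described in the abstract."""
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical per-case follow-up intervals (years) from each source.
panel    = np.array([10, 3, 5, 7, 10, 3, 5, 10])   # USMSTF expert panel (reference)
chatgpt  = np.array([10, 3, 5, 5, 10, 3, 5, 10])   # ChatGPT4 recommendation
practice = np.array([10, 5, 5, 7, 10, 3, 3,  5])   # endoscopist's report

# Paired 2x2 table of agreement with the reference:
# rows = ChatGPT4 correct/incorrect, columns = practice correct/incorrect.
gpt_ok = chatgpt == panel
gi_ok = practice == panel
table = np.array([
    [np.sum(gpt_ok & gi_ok),  np.sum(gpt_ok & ~gi_ok)],
    [np.sum(~gpt_ok & gi_ok), np.sum(~gpt_ok & ~gi_ok)],
])
print(mcnemar(table, exact=True))  # McNemar's test on the discordant cells

# Fleiss' kappa treating ChatGPT4 and the panel as two raters per case.
ratings = np.column_stack([chatgpt, panel])   # shape (n_cases, n_raters)
counts, _ = aggregate_raters(ratings)         # category counts per case
print("Fleiss kappa:", fleiss_kappa(counts, method="fleiss"))
```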