Evaluating Artificial Intelligence-Driven Responses to Acute Liver Failure Queries: A Comparative Analysis Across Accuracy, Clarity, and Relevance

Authors
Sheza Malik, Lewis J. Frey, Jason Gutman, Asim Mushtaq, Fatima Warraich, Kamran Qureshi
Source
Journal: The American Journal of Gastroenterology [American College of Gastroenterology]
Volume/Issue: 120 (9): 2081-2085 · Cited by: 4
Identifier
DOI:10.14309/ajg.0000000000003255
Abstract

INTRODUCTION: Recent advancements in artificial intelligence (AI), particularly through the deployment of large language models (LLMs), have profoundly impacted healthcare. This study assesses 5 LLMs (ChatGPT 3.5, ChatGPT 4, BARD, CLAUDE, and COPILOT) on the accuracy, clarity, and relevance of their responses to queries concerning acute liver failure (ALF). We then compare these results with ChatGPT 4 enhanced with retrieval-augmented generation (RAG) technology.

METHODS: Based on real-world clinical use and the American College of Gastroenterology guidelines, we formulated 16 ALF questions or clinical scenarios to probe the LLMs' ability to handle different clinical questions. Using the "New Chat" functionality, each query was processed individually across the models to reduce bias. Additionally, we employed the RAG functionality of GPT-4, which grounds results by integrating external sources as references. All responses were rated on a Likert scale from 1 to 5 for accuracy, clarity, and relevance by 4 independent investigators to ensure impartiality.

RESULTS: ChatGPT 4 augmented with RAG demonstrated superior performance, consistently scoring highest across all 3 domains (4.70 accuracy, 4.89 clarity, 4.78 relevance). ChatGPT 4 exhibited notable proficiency, with scores of 3.67 in accuracy, 4.04 in clarity, and 4.01 in relevance. CLAUDE achieved 3.65 in accuracy, 3.04 in clarity, and 3.6 in relevance. BARD and COPILOT exhibited lower performance levels: BARD recorded 2.01 in accuracy and 3.03 in relevance, while COPILOT obtained 2.26 in accuracy and 3.12 in relevance.

DISCUSSION: The study highlights the superior performance of ChatGPT 4 + RAG compared with the other LLMs. By integrating RAG with an LLM, the system combines generative language skills with accurate, up-to-date information. This improves response clarity, relevance, and accuracy, making such models more effective in healthcare. However, AI models must continually evolve and align with medical practices for successful healthcare integration.
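The evaluation protocol described in METHODS (4 independent raters scoring each response from 1 to 5 for accuracy, clarity, and relevance, then averaging per model and domain) can be sketched in a few lines of Python. All ratings below are hypothetical placeholders for illustration only, not the study's actual data.

```python
# Minimal sketch of Likert-score aggregation, assuming hypothetical ratings.
from statistics import mean

# ratings[model][domain] -> list of 1-5 scores pooled across questions and raters
ratings = {
    "ChatGPT 4 + RAG": {
        "accuracy": [5, 4, 5, 5],
        "clarity": [5, 5, 5, 4],
        "relevance": [5, 5, 4, 5],
    },
    "ChatGPT 4": {
        "accuracy": [4, 3, 4, 4],
        "clarity": [4, 4, 4, 4],
        "relevance": [4, 4, 4, 4],
    },
}

def mean_scores(ratings):
    """Mean Likert score per model and domain, rounded to 2 decimals."""
    return {
        model: {domain: round(mean(scores), 2) for domain, scores in domains.items()}
        for model, domains in ratings.items()
    }

for model, scores in mean_scores(ratings).items():
    print(model, scores)
```

In the study itself, each of the 16 questions was scored by 4 investigators, so each model-domain cell would pool 64 ratings rather than the 4 placeholders shown here.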