Evaluation of ChatGPT and Google Bard Using Prompt Engineering in Cancer Screening Algorithms

Authors
Daniel Nguyen, Daniel R. Swanson, Alex Newbury, Young H. Kim
Source
Journal: Academic Radiology [Elsevier]
Volume/Issue: 31 (5): 1799-1804 · Cited by: 8
Identifier
DOI: 10.1016/j.acra.2023.11.002
Abstract

Large language models (LLMs) such as ChatGPT and Bard have emerged as powerful tools in medicine, showcasing strong results in tasks such as radiology report translations and research paper drafting. While their implementation in clinical practice holds promise, their response accuracy remains variable. This study aimed to evaluate the accuracy of ChatGPT and Bard in clinical decision-making based on the American College of Radiology Appropriateness Criteria for various cancers. Both LLMs were evaluated in terms of their responses to open-ended (OE) and select-all-that-apply (SATA) prompts. Furthermore, the study incorporated prompt engineering (PE) techniques to enhance the accuracy of LLM outputs. The results revealed similar performances between ChatGPT and Bard on OE prompts, with ChatGPT exhibiting marginally higher accuracy in SATA scenarios. The introduction of PE also marginally improved LLM outputs in OE prompts but did not enhance SATA responses. The results highlight the potential of LLMs in aiding clinical decision-making processes, especially when guided by optimally engineered prompts. Future studies in diverse clinical situations are imperative to better understand the impact of LLMs in radiology.
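The abstract does not include the authors' scoring rubric or prompts. The following is a minimal hypothetical sketch of how such an evaluation could be wired up: a SATA item is scored by comparing the model's selected imaging studies against an answer key derived from the ACR Appropriateness Criteria, an OE answer is credited if it names any appropriate study, and a prompt-engineering variant prepends a role instruction. The answer key, scoring rules, and `engineer_prompt` preamble are illustrative assumptions, not the study's actual method; model calls are stubbed out.

```python
# Hypothetical evaluation scaffold for OE vs. SATA prompting (not the
# authors' code). The answer key below is an illustrative placeholder.
ACR_ANSWER_KEY = {
    "breast-cancer-screening-high-risk": {"MRI breast", "Mammography"},
}

def score_sata(model_selection: set[str], correct: set[str]) -> float:
    """Fraction of candidate options handled correctly: an option counts
    as right if it was selected exactly when it is appropriate."""
    universe = model_selection | correct
    right = sum(
        1 for opt in universe
        if (opt in model_selection) == (opt in correct)
    )
    return right / len(universe) if universe else 1.0

def score_open_ended(model_answer: str, correct: set[str]) -> bool:
    """Credit an open-ended answer if it mentions any appropriate study."""
    answer = model_answer.lower()
    return any(opt.lower() in answer for opt in correct)

def engineer_prompt(base_prompt: str) -> str:
    """Assumed PE technique: prepend a role/instruction preamble."""
    preamble = (
        "You are a radiologist. Answer strictly according to the "
        "ACR Appropriateness Criteria.\n"
    )
    return preamble + base_prompt
```

For example, a SATA response of `{"MRI breast", "CT chest"}` against the key above scores 1/3 (one correct selection, one missed study, one inappropriate study), while an OE answer containing the word "mammography" is credited. Real use would replace the stubbed answer key with items built from the published criteria and feed `engineer_prompt(...)` to the LLM under test.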