How AI Responds to Common Lung Cancer Questions: ChatGPT versus Google Bard

Authors
Amir Ali Rahsepar, Neda Tavakoli, Grace Hyun J. Kim, Cameron Hassani, Fereidoun Abtin, Arash Bedayat
Source
Journal: Radiology [Radiological Society of North America]
Volume/Issue: 307 (5); Citations: 83
Identifier
DOI: 10.1148/radiol.230922
Abstract

Background The recent release of large language models (LLMs) such as ChatGPT and Google Bard for public use has opened up a multitude of potential benefits as well as challenges.

Purpose To evaluate and compare the accuracy and consistency of responses generated by the publicly available ChatGPT-3.5 and Google Bard to non-expert questions related to lung cancer prevention, screening, and terminology commonly used in radiology reports, based on the recommendations of the American College of Radiology Lung Imaging Reporting and Data System (Lung-RADS) v2022 and the Fleischner Society.

Materials and Methods Forty identical questions were created and presented to ChatGPT-3.5, the Google Bard experimental version, and the Bing and Google search engines by three different authors of this paper. Each answer was reviewed by two radiologists for accuracy. Responses were scored as correct, partially correct, incorrect, or unanswered. Consistency was also evaluated, defined here as agreement among the three answers provided by each of ChatGPT-3.5, the Google Bard experimental version, Bing, and the Google search engine, regardless of whether the concept conveyed was correct or incorrect. Accuracy among the different tools was compared using Stata.

Results ChatGPT-3.5 answered 120 questions, with 85 (70.8%) correct, 14 (11.7%) partially correct, and 21 (17.5%) incorrect. Google Bard did not answer 23 (19.1%) questions; of the 97 questions it answered, 62 (51.7%) were correct, 11 (9.2%) partially correct, and 24 (20%) incorrect. Bing answered 120 questions, with 74 (61.7%) correct, 13 (10.8%) partially correct, and 33 (27.5%) incorrect. The Google search engine answered 120 questions, with 66 (55%) correct, 27 (22.5%) partially correct, and 27 (22.5%) incorrect. ChatGPT-3.5 was approximately 1.5-fold more likely than Google Bard to provide a correct or partially correct answer (OR = 1.55, P = .004). ChatGPT-3.5 and the Google search engine were more likely to be consistent than Google Bard, by approximately 7- and 29-fold, respectively (OR = 6.65, P = .002 for ChatGPT-3.5; OR = 28.83, P = .002 for the Google search engine).

Conclusion Although ChatGPT-3.5 had higher accuracy than the other tools, none of ChatGPT-3.5, Google Bard, Bing, or the Google search engine answered all questions correctly and with 100% consistency.
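The reported 1.5-fold odds ratio can be reproduced directly from the counts in the abstract. The sketch below is illustrative only (the function name and grouping are assumptions, not the study's Stata code): it treats "correct or partially correct" as a success, uses incorrect answers as failures, and implicitly excludes Google Bard's 23 unanswered questions, which matches the 97 answered questions stated in Results.

```python
# Minimal sketch: recompute the abstract's odds ratio for accuracy,
# ChatGPT-3.5 vs Google Bard, from the reported 2x2 counts.

def odds_ratio(success_a, failure_a, success_b, failure_b):
    """Odds of success for tool A divided by odds of success for tool B."""
    return (success_a / failure_a) / (success_b / failure_b)

chatgpt_success = 85 + 14   # correct + partially correct answers
chatgpt_failure = 21        # incorrect answers
bard_success = 62 + 11      # correct + partially correct answers
bard_failure = 24           # incorrect answers (23 unanswered excluded)

or_val = odds_ratio(chatgpt_success, chatgpt_failure, bard_success, bard_failure)
print(round(or_val, 2))  # → 1.55, matching the reported OR
```

That the raw counts reproduce OR = 1.55 suggests the paper's comparison is the simple unadjusted odds ratio over answered questions; the P value would additionally require a significance test (e.g. Fisher's exact or logistic regression), which the abstract attributes to Stata.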