Evaluating ChatGPT's effectiveness and tendencies in Japanese internal medicine

Keywords: Mirroring · Variance (accounting) · Set (abstract data type) · Process (computing) · Statistics · Confidence interval · Medicine · Artificial intelligence · MEDLINE · Computer science · Psychology · Internal medicine · Machine learning · Clinical trial · Data mining · Clinical clerkship · Statistical significance · Variance components · Diagnostic accuracy · Mathematics · Statistical analysis · Medical education · Alternative medicine · Evidence-based medicine · Analysis of variance
Authors
Yudai Kaneda,Akari Tayuinosho,Rika Tomoyose,Morihito Takita,Tamae Hamaki,Tetsuya Tanimoto,Akihiko Ozaki
Source
Journal: Journal of Evaluation in Clinical Practice [Wiley]
Volume/Issue: 30 (6): 1017-1023 · Cited by: 1
Identifier
DOI:10.1111/jep.14011
Abstract

Introduction: ChatGPT, a large-scale language model, is a notable example of AI's potential in health care. However, its effectiveness in clinical settings, especially when compared to human physicians, is not fully understood. This study evaluates ChatGPT's capabilities and limitations in answering questions for Japanese internal medicine specialists, aiming to clarify its accuracy and tendencies in both correct and incorrect responses.

Methods: We utilized ChatGPT's answers on four sets of self-training questions for internal medicine specialists in Japan from 2020 to 2023. We ran three trials for each set to evaluate its overall accuracy and performance on nonimage questions. Subsequently, we categorized the questions into two groups: those ChatGPT consistently answered correctly (Confirmed Correct Answer, CCA) and those it consistently answered incorrectly (Confirmed Incorrect Answer, CIA). For these groups, we calculated the average accuracy rates and 95% confidence intervals based on the actual performance of internal medicine physicians on each question and analyzed the statistical significance between the two groups. This process was then similarly applied to the subset of nonimage CCA and CIA questions.

Results: ChatGPT's overall accuracy rate was 59.05%, increasing to 65.76% for nonimage questions. For 24.87% of the questions, answers varied between correct and incorrect across the three trials. Despite surpassing the passing threshold for nonimage questions, ChatGPT's accuracy was lower than that of human specialists. There was a significant variance in accuracy between the CCA and CIA groups, with ChatGPT mirroring human physician patterns in responding to different question types.

Conclusion: This study underscores ChatGPT's potential utility and limitations in internal medicine. While effective in some aspects, its dependence on question type and context suggests that it should supplement, not replace, professional medical judgment. Further research is needed to integrate artificial intelligence tools like ChatGPT more effectively into specialized medical practices.
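The Methods describe comparing the CCA and CIA question groups by computing, for each group, the mean physician accuracy rate and its 95% confidence interval. A minimal sketch of that comparison is shown below; the per-question physician accuracy rates are hypothetical placeholder values (the paper's actual data are not reproduced here), and the normal-approximation CI is one common choice, not necessarily the method the authors used.

```python
import math
import statistics

def mean_ci(rates, z=1.96):
    """Mean of a list of accuracy rates plus a normal-approximation
    95% confidence interval (z = 1.96)."""
    m = statistics.mean(rates)
    se = statistics.stdev(rates) / math.sqrt(len(rates))
    return m, (m - z * se, m + z * se)

# Hypothetical per-question physician accuracy rates (illustrative only).
cca = [0.82, 0.78, 0.90, 0.85, 0.88]  # questions ChatGPT always answered correctly
cia = [0.55, 0.48, 0.62, 0.51, 0.58]  # questions ChatGPT always answered incorrectly

m_cca, ci_cca = mean_ci(cca)
m_cia, ci_cia = mean_ci(cia)

# Non-overlapping confidence intervals would be consistent with the
# significant CCA-vs-CIA difference the study reports.
print(f"CCA: mean={m_cca:.3f}, 95% CI=({ci_cca[0]:.3f}, {ci_cca[1]:.3f})")
print(f"CIA: mean={m_cia:.3f}, 95% CI=({ci_cia[0]:.3f}, {ci_cia[1]:.3f})")
```

The pattern reported in the paper (physicians also do better on CCA questions than CIA questions) would show up here as a higher mean and a higher CI for the CCA group.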