Meta-analysis
Generative grammar
Reliability (semiconductor)
Diagnostic accuracy
Systematic review
Artificial intelligence
Diagnostic test
Computer science
Medical physics
MEDLINE
Machine learning
Medicine
Pathology
Internal medicine
Pediatrics
Political science
Law
Power (physics)
Physics
Quantum mechanics
Authors
Hirotaka Takita, Daijiro Kabata, Shannon L. Walston, Hiroyuki Tatekawa, Kenichi Saito, Yasushi Tsujimoto, Yukio Miki, Daiju Ueda
Identifier
DOI: 10.1038/s41746-025-01543-z
Abstract
While generative artificial intelligence (AI) has shown potential in medical diagnostics, comprehensive evaluation of its diagnostic performance and comparison with physicians has not been extensively explored. We conducted a systematic review and meta-analysis of studies validating generative AI models for diagnostic tasks published between June 2018 and June 2024. Analysis of 83 studies revealed an overall diagnostic accuracy of 52.1%. No significant performance difference was found between AI models and physicians overall (p = 0.10) or non-expert physicians (p = 0.93). However, AI models performed significantly worse than expert physicians (p = 0.007). Several models demonstrated slightly higher performance compared to non-experts, although the differences were not significant. Generative AI demonstrates promising diagnostic capabilities with accuracy varying by model. Although it has not yet achieved expert-level reliability, these findings suggest potential for enhancing healthcare delivery and medical education when implemented with appropriate understanding of its limitations.