Authors
Subhan Ali, Filza Akhlaq, Ali Shariq Imran, Zenun Kastrati, Sher Muhammad Daudpota, Muhammad Moosa
Identifier
DOI: 10.1016/j.compbiomed.2023.107555
Abstract
In domains such as medicine and healthcare, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors caused by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research focused on understanding the black-box nature of complex, hard-to-interpret machine learning models. While humans can increase the accuracy of these models through technical expertise, understanding how they actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can provide explanations for these models, improving trust in their predictions by reporting feature importance and thereby increasing confidence in the systems. Many articles have been published that propose solutions to medical problems by pairing machine learning models with XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published from 2018 to 2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
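To make the workflow the abstract alludes to concrete, below is a minimal sketch of how SHAP feature importances are typically obtained for an opaque classifier. It is an illustration only, not the method of any reviewed article: it assumes Python with scikit-learn and the shap package, and the breast-cancer dataset and random-forest model are stand-in choices for a clinical prediction task.

import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit an opaque ensemble model on a standard clinical-style dataset
# (illustrative choice, not taken from the reviewed studies).
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Older shap versions return one array per class; newer ones return a
# single (samples, features, classes) array. Normalize to the positive
# class, then rank features by mean absolute attribution, which serves
# as the "feature importance" the abstract mentions.
values = shap_values[1] if isinstance(shap_values, list) else shap_values
importance = np.abs(values).mean(axis=0)
if importance.ndim == 2:          # (features, classes) -> positive class
    importance = importance[:, 1]
for name, score in sorted(zip(data.feature_names, importance),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.4f}")

LIME follows the same pattern at the level of a single prediction: it fits a simple local surrogate model around one input and reports per-feature weights for that one case, whereas the SHAP aggregation above yields a global ranking.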