Artificial intelligence
Computer science
Random forest
Trustworthiness
Classifier (UML)
Machine learning
Black box
Data science
Computer security
Authors
Mrutyunjaya Panda, Soumya Ranjan Mahanta
Source
Journal: Cornell University - arXiv
Date: 2023-11-09
Identifier
DOI: 10.48550/arxiv.2311.05665
Abstract
With advances in computationally efficient artificial intelligence (AI) techniques and their numerous applications in everyday life, there is a pressing need to understand, through more detailed explanations, the computational details hidden in black-box AI techniques such as the most popular machine learning and deep learning methods. Explainable AI (xAI) originated from these challenges and has recently gained more attention from researchers, who aim to add explainability comprehensively to traditional AI systems. This motivates the development of an appropriate framework for successful application of xAI in real-life scenarios with respect to innovation, risk mitigation, ethical issues, and logical value to users. In this book chapter, an in-depth analysis of several xAI frameworks and methods, including LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), is provided. A Random Forest classifier, serving as the black-box AI, is applied to a publicly available diabetes-symptoms dataset, with LIME and SHAP used for better interpretation. The results obtained are interesting in terms of transparency, validity, and trustworthiness in diabetes disease prediction.
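The workflow the abstract describes — a Random Forest as the black-box model plus a model-agnostic explanation step — can be sketched as follows. This is a minimal illustration, not the chapter's actual pipeline: the real Diabetes symptoms dataset is not reproduced here, so a synthetic stand-in is generated, and scikit-learn's permutation importance is used as a lightweight model-agnostic attribution in place of the LIME and SHAP libraries the chapter employs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary classification data standing in for the
# diabetes-symptoms dataset (assumption: 8 features, 2 classes).
X, y = make_classification(n_samples=400, n_features=8,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Black-box model: a Random Forest classifier, as in the chapter.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_tr, y_tr)

# Model-agnostic attribution: shuffle each feature on held-out data
# and measure the drop in accuracy. Conceptually this plays the same
# role as LIME/SHAP -- quantifying each feature's contribution --
# though the real libraries give richer, per-instance explanations.
imp = permutation_importance(rf, X_te, y_te, n_repeats=10,
                             random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("test accuracy:", rf.score(X_te, y_te))
print("feature ranking (most to least important):", ranking.tolist())
```

In the chapter's actual setting, the final step would instead call the `lime` and `shap` packages (e.g. `shap.TreeExplainer` for tree ensembles) to obtain local, per-patient explanations rather than a single global ranking.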