Taxonomy (biology)
Artificial intelligence
Computer science
Field (mathematics)
Software deployment
Accountability
Data science
Management science
Knowledge management
Political science
Engineering
Software engineering
Biology
Botany
Mathematics
Pure mathematics
Law
Authors
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 14
Identifiers
DOI: 10.48550/arxiv.1910.10045
Abstract
In recent years, Artificial Intelligence (AI) has gained notable momentum that may deliver on high expectations across many application sectors. For this to occur, the entire community faces the barrier of explainability, an inherent problem of the sub-symbolic AI techniques (e.g., ensembles or Deep Neural Networks) that were not present in the previous wave of AI. Paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as crucial for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material to stimulate future research advances, and to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors without any prior bias stemming from its lack of interpretability.