Computer Science
Transparency (behavior)
Python (programming language)
Artificial Intelligence
Generalization
Trustworthiness
Machine Learning
Data Science
Programming Languages
Epistemology
Computer Security
Philosophy
Authors
Adrien Bennetot, Ivan Donadello, A. Haouari, Mauro Dragoni, Thomas Frossard, B.J. Wagner, Anna Sarranti, Silvia Tulli, Maria Trocan, Raja Chatila, Andreas Holzinger, Artur S. d'Avila Garcez, Natalia Díaz-Rodríguez
Abstract
The past years have been characterized by an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for their behavior. As opaque Machine Learning models are increasingly being employed to make important predictions in critical domains, there is a danger of creating and using decisions that are not justifiable or legitimate. Therefore, there is general agreement on the importance of endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is intended to be the go-to handbook for anyone with a computer science background aiming to obtain intuitive insight from Machine Learning models, accompanied by out-of-the-box explanations. The article aims to rectify the lack of a practical XAI guide by applying XAI techniques to everyday models, datasets, and use cases. In each chapter, the reader will find a description of the proposed method as well as one or several examples of its use with Python notebooks, which can easily be modified and applied to specific problems. We also explain the prerequisites for using each technique, what the user will learn from it, and which tasks it is aimed at.
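The guide itself ships Python notebooks; as a flavor of the model-agnostic XAI techniques it covers, the sketch below implements permutation feature importance from scratch (a standard technique, not taken from the paper's notebooks): shuffle one feature at a time and measure how much the model's error grows. The toy dataset, the stand-in "opaque" model, and all names here are illustrative assumptions.

```python
import random

# Toy dataset: y depends strongly on x0, weakly on x1, not at all on x2.
random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [3.0 * row[0] + 0.5 * row[1] for row in X]

def model(row):
    # Stand-in for any opaque predictor (e.g. a trained DNN).
    return 3.0 * row[0] + 0.5 * row[1]

def mse(X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, n_repeats=5):
    """Average increase in MSE when `feature` is shuffled across samples."""
    base = mse(X, y)
    deltas = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        random.shuffle(col)  # break the feature-target association
        Xp = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
        deltas.append(mse(Xp, y) - base)
    return sum(deltas) / n_repeats

for i in range(3):
    print(f"feature {i}: importance {permutation_importance(X, y, i):.4f}")
```

Because the stand-in model ignores x2, shuffling it leaves the error unchanged, so its importance comes out as zero; x0 dominates. Libraries such as scikit-learn offer a production-grade `permutation_importance`, but the ten-line version above shows the whole idea.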