Interpretability
Black box
Computer science
Perspective (graphics)
Data science
Artificial intelligence
Machine learning
Management science
Economics
Authors
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, Dino Pedreschi
Source
Journal: ACM Computing Surveys (Association for Computing Machinery)
Date: 2018-08-22
Volume/Issue: 51(5): 1-42
Citations: 3090
Abstract
In recent years, many accurate decision support systems have been constructed as black boxes, that is, as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are varied, and each approach is typically developed to solve a specific problem; as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work. The proposed classification of approaches to opening black box models should also be useful for putting the many open research questions in perspective.