Interpretability
Artificial intelligence
Machine learning
Computer science
Field (mathematics)
Perspective (graphical)
Interpretation (philosophy)
Artificial neural network
Deep neural network
Deep learning
Selection (genetic algorithm)
Kernel (algebra)
Mathematics
Combinatorics
Programming language
Pure mathematics
Authors
Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus‐Robert Müller
Source
Journal: Cornell University - arXiv
Date: 2020-03-17
Citations: 20
Abstract
With the broader and highly successful usage of machine learning in industry and the sciences, there has been a growing demand for explainable AI. Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear machine learning models, such as deep learning (DL), LSTMs, and kernel methods, are therefore receiving increased attention. In this work we aim to (1) provide a timely overview of this active emerging field and explain its theoretical foundations, (2) put interpretability algorithms to the test from both a theoretical and a comparative-evaluation perspective using extensive simulations, (3) outline best-practice aspects, i.e., how to best include interpretation methods in the standard usage of machine learning, and (4) demonstrate successful usage of explainable AI in a representative selection of application scenarios. Finally, we discuss challenges and possible future directions of this exciting foundational field of machine learning.
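To make the notion of an "interpretation method" concrete, the following is a minimal sketch of one of the simplest attribution techniques, gradient × input, applied to a toy logistic-regression model. The model, weights, and inputs are hypothetical illustrations, not taken from the paper; the paper surveys a much broader family of methods (e.g., layer-wise relevance propagation) for deep networks.

```python
import numpy as np

def predict(x, w, b):
    """Logistic-regression score for a single input vector."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def gradient_x_input(x, w, b):
    """Attribute the prediction to each input feature.

    For a logistic model the gradient of the output with respect to
    the input is sigma'(z) * w, so the per-feature attribution is
    x_i * sigma'(z) * w_i.
    """
    z = x @ w + b
    s = 1.0 / (1.0 + np.exp(-z))
    return x * (s * (1.0 - s)) * w

# Hypothetical toy model and input.
w = np.array([2.0, -1.0, 0.0])
b = 0.0
x = np.array([1.0, 1.0, 5.0])

attr = gradient_x_input(x, w, b)
print(attr)  # third feature has zero weight, so zero attribution
```

For deep nonlinear models the same idea is applied via automatic differentiation; the attribution map then highlights which input features (e.g., pixels) drove the prediction, which is the kind of explanation the surveyed methods produce.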