Field (mathematics)
Deep learning
Context (archaeology)
Artificial intelligence
Computer science
Deep neural networks
Space (punctuation)
Legislation
Data science
Point (geometry)
Enforcement
Cognitive science
Management science
Machine learning
Engineering
Psychology
Political science
Mathematics
Law
Geography
Geometry
Pure mathematics
Archaeology
Operating system
Authors
Gabriëlle Ras,Ning Xie,Marcel van Gerven,Derek Doran
Abstract
Deep neural networks (DNNs) are an indispensable machine learning tool despite the difficulty of diagnosing what aspects of a model’s input drive its decisions. In countless real-world domains, from legislation and law enforcement to healthcare, such diagnosis is essential to ensure that DNN decisions are driven by aspects appropriate to the context of their use. The development of methods and studies enabling the explanation of a DNN’s decisions has thus blossomed into an active and broad area of research. The field’s complexity is exacerbated by competing definitions of what it means “to explain” the actions of a DNN and how to evaluate an approach’s “ability to explain”. This article offers a field guide to explore the space of explainable deep learning for those in the AI/ML field who are uninitiated. The field guide: i) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning, ii) discusses the evaluation of model explanations, iii) places explainability in the context of other related deep learning research areas, and iv) discusses user-oriented explanation design and future directions. We hope the guide serves as a starting point for those embarking on this research field.