Computer Science
Data Science
Artificial Intelligence
Information Retrieval
Machine Learning
Authors
Yuqing Yang, Boris Joukovsky, José Oramas, Tinne Tuytelaars, Nikos Deligiannis
Abstract
Explainable Artificial Intelligence (XAI) aims to help humans better understand machine learning decisions and has been identified as a critical component for increasing the trustworthiness of complex black-box systems such as deep neural networks (DNNs). In this paper, we propose a generic and comprehensive framework named SNIPPET and create a user interface for the subjective evaluation of visual explanations, focusing on finding human-friendly explanations. SNIPPET considers human-centered evaluation tasks and incorporates the collection of human annotations, which can serve as valuable feedback to validate the qualitative results obtained from the subjective assessment tasks. Moreover, we consider different user background categories during the evaluation process to ensure diverse perspectives and comprehensive evaluation. We demonstrate SNIPPET on a DeepFake face dataset. Distinguishing real from fake faces is non-trivial even for humans and depends on rather subtle features, making it a challenging use case. Using SNIPPET, we evaluate four popular XAI methods that provide visual explanations: Gradient-weighted Class Activation Mapping (GradCAM), Layer-wise Relevance Propagation (LRP), attention rollout (rollout), and Transformer Attribution (TA). Based on our experimental results, we observe preference variations among different user categories. We find that most participants favor the explanations produced by rollout. Moreover, when it comes to XAI-assisted understanding, users who lack relevant background knowledge often consider the visual explanations insufficient to help them understand. We open-source our framework for continued data collection and annotation at https://github.com/XAI-SubjEvaluation/SNIPPET.
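For readers unfamiliar with the rollout method that participants preferred, the following is a minimal sketch of the standard attention rollout computation (Abnar & Zuidema, 2020), not the paper's own implementation. The function name and the assumed input shape `(heads, tokens, tokens)` for each layer's attention tensor are illustrative choices.

```python
import numpy as np

def attention_rollout(attentions):
    """Combine per-layer attention maps into a single token-to-token
    relevance map by recursive matrix multiplication.

    attentions: list of arrays, one per layer, each of shape
                (heads, tokens, tokens) with row-stochastic attention.
    """
    n_tokens = attentions[0].shape[-1]
    rollout = np.eye(n_tokens)
    for attn in attentions:
        # Average over heads.
        a = attn.mean(axis=0)
        # Add the identity to account for residual connections,
        # then renormalize rows so they remain a distribution.
        a = 0.5 * a + 0.5 * np.eye(n_tokens)
        a = a / a.sum(axis=-1, keepdims=True)
        # Propagate attention through this layer.
        rollout = a @ rollout
    return rollout
```

The first row of the result (the CLS token's attention over patch tokens, in a ViT-style model) is typically reshaped into a 2D heatmap to produce the visual explanation.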