Lime
Computer science
Artificial intelligence
Classifier (UML)
Perception
Machine learning
Simplicity (philosophy)
Algorithm
Data mining
Biology
Epistemology
Philosophy
Neuroscience
Metallurgy
Materials science
Authors
Juan A. Recio-García, Belén Díaz-Agudo, Victor Pino-Castilla
Identifier
DOI:10.1007/978-3-030-58342-2_12
Abstract
Research on eXplainable AI has proposed several model-agnostic algorithms, LIME [] (Local Interpretable Model-Agnostic Explanations) being one of the most popular. LIME works by modifying the query input locally: instead of trying to explain the entire model, the specific input instance is perturbed, and the impact on the predictions is monitored and used as an explanation. Although LIME is general and flexible, there are scenarios where simple perturbations are not enough, so other approaches such as Anchor make the perturbation strategy depend on the dataset. In this paper, we propose a CBR solution to the problem of configuring the parameters of the LIME algorithm for the explanation of an image classifier. The case base reflects the human perception of the quality of the explanations generated with different parameter configurations of LIME. This parameter configuration is then reused for similar input images.
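The perturbation idea described in the abstract can be illustrated with a minimal sketch. This is not the actual LIME implementation, and `explain_locally` and `predict_fn` are hypothetical names: the instance is masked at random, the black-box model is queried on each masked variant, and a distance-weighted linear surrogate is fitted whose coefficients act as local feature importances.

```python
import numpy as np

def explain_locally(predict_fn, instance, num_features, num_samples=500, seed=0):
    """Toy LIME-style explanation (illustrative, not the real library):
    perturb an instance by masking features, query the black-box model,
    and fit a weighted linear surrogate whose coefficients serve as
    local feature importances."""
    rng = np.random.default_rng(seed)
    # Binary masks: 1 keeps a feature, 0 zeroes it out.
    masks = rng.integers(0, 2, size=(num_samples, num_features))
    preds = np.array([predict_fn(instance * m) for m in masks])
    # Weight each perturbed sample by its proximity to the original
    # (all-ones) mask, as LIME does with a locality kernel.
    distances = (num_features - masks.sum(axis=1)) / num_features
    weights = np.exp(-(distances ** 2) / 0.25)
    # Weighted least squares: scale rows by sqrt(weight), add intercept.
    w = np.sqrt(weights)[:, None]
    X = np.hstack([masks, np.ones((num_samples, 1))])
    coefs, *_ = np.linalg.lstsq(X * w, preds * w[:, 0], rcond=None)
    return coefs[:num_features]  # one importance per feature

# Example black box that only depends on feature 0.
f = lambda x: 3.0 * x[0]
importances = explain_locally(f, np.array([1.0, 1.0, 1.0]), num_features=3)
```

In the real algorithm the masked units are image superpixels rather than scalar features, and the number of samples, the kernel width, and the segmentation method are exactly the kind of parameters the paper's CBR approach configures per input image.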