Linear subspace
Computer science
Subspace topology
Artificial intelligence
Representation (politics)
Machine learning
Relevance (law)
Invariant (physics)
Principal component analysis
Feature (linguistics)
Artificial neural network
Pattern recognition (psychology)
Mathematics
Politics
Philosophy
Linguistics
Mathematical physics
Law
Political science
Geometry
Authors
Pattarawat Chormai, Jan Herrmann, Klaus-Robert Müller, Grégoire Montavon
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 3
Identifier
DOI: 10.48550/arXiv.2212.14855
Abstract
Explainable AI aims to overcome the black-box nature of complex ML models like neural networks by generating explanations for their predictions. Explanations often take the form of a heatmap identifying input features (e.g. pixels) that are relevant to the model's decision. These explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by extracting, at some intermediate layer of a neural network, subspaces that capture the multiple and distinct activation patterns (e.g. visual concepts) that are relevant to the prediction. To automatically extract these subspaces, we propose two new analyses, extending principles found in PCA or ICA to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), maximize relevance instead of e.g. variance or kurtosis. This allows for a much stronger focus of the analysis on what the ML model actually uses for predicting, ignoring activations or concepts to which the model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. The proposed methods prove to be practically useful and compare favorably to the state of the art, as demonstrated on benchmarks and three use cases.
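The abstract's core idea — maximizing relevance rather than variance when extracting a subspace — can be illustrated with a toy sketch. This is only an assumption-laden illustration, not the paper's actual formulation: we assume a PRCA-like objective reduces to an eigenvalue problem on a symmetrized activation–relevance cross-moment matrix, and we fabricate both the activations `A` and the per-neuron relevance contributions `C` (which in practice would come from an attribution method such as LRP or Integrated Gradients).

```python
# Hedged sketch of a relevance-maximizing subspace analysis (PRCA-like).
# Assumptions: the objective reduces to an eigenproblem on A^T C + C^T A;
# A (activations) and C (relevance contributions) are synthetic toy data.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 10, 2                # samples, activation dim, subspace dim

A = rng.normal(size=(n, d))         # toy intermediate-layer activations
C = np.zeros((n, d))
C[:, :2] = A[:, :2]                 # only the first two directions carry relevance

# PCA baseline: maximize variance, i.e. eigenvectors of A^T A.
pca_vecs = np.linalg.eigh(A.T @ A)[1][:, ::-1][:, :k]

# Relevance-maximizing variant (sketch): eigenvectors of the symmetrized
# activation-relevance matrix, so directions are ranked by captured relevance.
M = A.T @ C + C.T @ A
prca_vecs = np.linalg.eigh(M)[1][:, ::-1][:, :k]
```

By construction the relevant subspace concentrates on the first two coordinate axes, whereas plain PCA has no reason to prefer them: the model may vary strongly along directions it is invariant to, and variance alone cannot tell those apart from decision-relevant ones.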