Counterfactual thinking
Computer science
Path (computing)
Artificial intelligence
Psychology
Computer network
Social psychology
Authors
Yicong Li, Xiangguo Sun, Hongxu Chen, Sixiao Zhang, Yu Yang, Guandong Xu
Identifier
DOI: 10.1109/TKDE.2024.3373608
Abstract
Beyond pursuing recommendation accuracy alone, the explainability of recommendation models has drawn increasing attention in recent years. Many graph-based recommenders rely on informative paths, weighted by an attention mechanism, to provide explanations. Unfortunately, these attention weights are designed for model accuracy, not explainability. Recently, some researchers have begun to question attention-based explainability, because attention weights are unstable across reproductions and may not align with human intuition. Inspired by counterfactual reasoning from causality learning theory, we propose a novel explainable framework targeting path-based recommendations, in which explainable path weights are learned to replace attention weights. Specifically, we design two counterfactual reasoning algorithms, from the path representation and the path topological structure perspectives, respectively. Moreover, going beyond traditional case studies, we also propose a package of explainability evaluation solutions comprising both qualitative and quantitative methods. We conduct extensive experiments on four real-world datasets, and the results further demonstrate the effectiveness and reliability of our method.
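To illustrate the core idea behind counterfactual reasoning over path topology (this is a hypothetical sketch for intuition only, not the authors' algorithm): a path's explainable weight can be measured as the change in the model's recommendation score when that path is removed, i.e., a counterfactual intervention on the path structure. The `score` function and per-path contributions below are invented toy stand-ins.

```python
# Hypothetical sketch: counterfactual importance of paths in a
# path-based recommender. A path's weight is the drop in the
# recommendation score when that path is counterfactually removed.

from typing import Callable, Dict, List, Sequence


def counterfactual_path_weights(
    paths: Sequence[str],
    score: Callable[[Sequence[str]], float],
) -> Dict[str, float]:
    """weight(p) = score(all paths) - score(all paths except p)."""
    factual = score(paths)
    weights: Dict[str, float] = {}
    for i, p in enumerate(paths):
        # Counterfactual world: the i-th path does not exist.
        reduced: List[str] = [q for j, q in enumerate(paths) if j != i]
        weights[p] = factual - score(reduced)
    return weights


# Toy scorer: each path contributes a fixed amount to the score
# (a real model would produce scores from learned path representations).
contrib = {"user-item": 0.6, "user-friend-item": 0.3, "user-tag-item": 0.1}


def toy_score(paths: Sequence[str]) -> float:
    return sum(contrib[p] for p in paths)


w = counterfactual_path_weights(list(contrib), toy_score)
```

Under this additive toy scorer, each path's counterfactual weight coincides with its own contribution; with a real, non-additive model the weights would instead capture each path's marginal causal effect on the prediction.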