Computer science
Shapley value
Message passing
Artificial intelligence
Theoretical computer science
Graph
Machine learning
Artificial neural network
Focus (optics)
Game theory
Distributed computing
Mathematics
Physics
Optics
Mathematical economics
Authors
Shurui Gui, Hao Yuan, Jie Wang, Qicheng Lao, Kang Li, Shuiwang Ji
Identifier
DOI: 10.1109/tpami.2023.3347470
Abstract
We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms. While most current methods focus on explaining graph nodes, edges, or features, we argue that message flows, as the inherent functional mechanism of GNNs, are more natural targets for explanation. To this end, we propose a novel method, known as FlowX, that explains GNNs by identifying important message flows. To quantify the importance of flows, we follow the philosophy of Shapley values from cooperative game theory. To tackle the complexity of computing all coalitions' marginal contributions, we propose a flow sampling scheme that computes Shapley value approximations as initial assessments for further training. We then propose an information-controlled learning algorithm to train flow scores toward diverse explanation targets: necessary or sufficient explanations. Experimental studies on both synthetic and real-world datasets demonstrate that our proposed FlowX and its variants lead to improved explainability of GNNs.
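The abstract grounds flow importance in Shapley values and approximates them by sampling, since enumerating all coalitions is exponential. As a rough illustration of the general Monte Carlo idea only (not the paper's actual flow sampling scheme), the sketch below estimates Shapley values by averaging marginal contributions over random player orderings; `value_fn` is a hypothetical coalition payoff standing in for a GNN prediction score over a set of message flows:

```python
import random

def shapley_monte_carlo(players, value_fn, n_samples=200, seed=0):
    """Approximate Shapley values via random-permutation sampling.

    players: list of hashable ids (stand-ins for message flows).
    value_fn: maps a frozenset of players to a real-valued payoff.
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = frozenset()
        prev = value_fn(coalition)
        # Accumulate each player's marginal contribution in this ordering.
        for p in order:
            coalition = coalition | {p}
            cur = value_fn(coalition)
            phi[p] += cur - prev
            prev = cur
    return {p: v / n_samples for p, v in phi.items()}

# Toy additive game: the payoff is the sum of individual weights, so each
# player's exact Shapley value equals its own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
est = shapley_monte_carlo(list(weights),
                          lambda S: sum(weights[p] for p in S))
```

For this additive toy game every sampled marginal contribution of a player equals its weight, so the estimate recovers the exact Shapley values; for a real GNN payoff the estimate would only converge with the number of sampled orderings.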