Interpretability
Computer science
Artificial intelligence
Machine learning
Graph
Artificial neural network
Profiling (computer programming)
Context (archaeology)
Data mining
Theoretical computer science
Biology
Operating system
Paleontology
Authors
Ehsan Bonabi Mobaraki,Arijit Khan
Identifier
DOI:10.1145/3594778.3594880
Abstract
Graph neural networks (GNNs) are widely used in many downstream applications, such as graph and node classification, entity resolution, link prediction, and question answering. Several interpretability methods for GNNs have been proposed recently. However, since they have not been thoroughly compared with each other, their trade-offs and efficiency in the context of underlying GNNs and downstream applications are unclear. To support more research in this domain, we develop an end-to-end interactive tool, named gInterpreter, by re-implementing 15 recent GNN interpretability methods in a common environment on top of a number of state-of-the-art GNNs employed for different downstream tasks. This paper demonstrates gInterpreter with an interactive performance profiling of 15 recent GNN interpretability methods, aiming to explain the complex deep learning pipelines over graph-structured data.
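To make the abstract's notion of "GNN interpretability" concrete, the sketch below is a minimal, self-contained illustration (not code from gInterpreter or the paper): a one-layer GCN classifies nodes of a toy graph, and a perturbation-based saliency score is computed per edge, which is the general flavor of explanation that many of the compared methods produce. All names, sizes, and the finite-difference scheme here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a tiny one-layer GCN on a toy 4-node graph,
# followed by an edge-saliency explanation for one node's prediction.
np.random.seed(0)

# Adjacency with self-loops, then symmetric normalization D^{-1/2} A D^{-1/2}
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

X = np.random.randn(4, 3)   # node features (4 nodes, 3 features)
W = np.random.randn(3, 2)   # layer weights (2 output classes)

def forward(a_hat):
    """One GCN layer with softmax: returns per-node class probabilities."""
    z = a_hat @ X @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Perturbation-based edge saliency for node 0's predicted class:
# bump each existing edge weight slightly and measure the change
# in the prediction (a finite-difference stand-in for a gradient).
probs = forward(A_hat)
target = probs[0].argmax()
eps = 1e-5
saliency = np.zeros_like(A_hat)
for i in range(4):
    for j in range(4):
        if A[i, j] == 0:
            continue  # only score edges that exist in the graph
        bumped = A_hat.copy()
        bumped[i, j] += eps
        saliency[i, j] = (forward(bumped)[0, target] - probs[0, target]) / eps

top_edge = np.unravel_index(np.abs(saliency).argmax(), saliency.shape)
print("most influential edge for node 0:", top_edge)
```

An interpretability tool like the one described would report such per-edge (or per-feature) importance scores, letting users compare which parts of the graph each method credits for a prediction.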