Computer science
Message passing
Graph
Theoretical computer science
Artificial neural network
Representation (politics)
Artificial intelligence
Distributed computing
Political science
Politics
Law
Authors
Xiaolong Fan, Maoguo Gong, Yue Wu, A. K. Qin, Yu Xie
Identifier
DOI:10.1109/tkde.2021.3102964
Abstract
Graph Neural Networks (GNNs) make it possible to apply deep neural networks to graph domains. Recently, Message Passing Neural Networks (MPNNs) have been proposed to generalize several existing graph neural networks into a unified framework. For graph representation learning, MPNNs first generate discriminative node representations using the message passing function and then read from the node representation space to generate a graph representation using the readout function. In this paper, we analyze the representation capacity of MPNNs for aggregating graph information and observe that existing approaches ignore the self-loop for graph representation learning, which limits their representation capacity. To alleviate this issue, we introduce a simple yet effective propagation-enhanced extension, Self-Connected Neural Message Passing (SC-NMP), which aggregates the node representations of the current step and the graph representation of the previous step. To further improve the information flow, we also propose Densely Self-Connected Neural Message Passing (DSC-NMP), which connects each layer to every other layer in a feed-forward fashion. Both proposed architectures are applied at each layer, and the resulting graph representation is then used as input to all subsequent layers. Remarkably, combining these two architectures with existing GNN variants improves their performance for graph representation learning. Extensive experiments on various benchmark datasets demonstrate the effectiveness of the proposed architectures, which achieve superior performance on graph classification and regression tasks.
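To make the self-connected aggregation described above concrete, the following is a minimal PyTorch sketch of how an SC-NMP-style layer and its densely connected variant might be wired, assuming a simple sum-based neighbor aggregation, a GRU node update, and a mean readout. The class names SCNMPLayer and DSCNMP, the mixing function, and all hyperparameters are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

class SCNMPLayer(nn.Module):
    """One self-connected message passing step (hypothetical sketch).

    Aggregates neighbor messages for each node, updates node states, and
    produces a graph representation that mixes the current nodes with the
    graph representation of the previous step (the "self-loop" on the
    readout path described in the abstract).
    """
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)      # message function
        self.upd = nn.GRUCell(dim, dim)     # node update function
        self.mix = nn.Linear(2 * dim, dim)  # combines node readout with previous graph state

    def forward(self, h, adj, g_prev):
        # h: [N, dim] node states, adj: [N, N] adjacency, g_prev: [dim] graph state
        m = adj @ self.msg(h)               # sum-aggregate neighbor messages
        h = self.upd(m, h)                  # update node representations
        # readout: node representations of the current step + graph rep of the previous step
        g = torch.tanh(self.mix(torch.cat([h.mean(dim=0), g_prev], dim=-1)))
        return h, g

class DSCNMP(nn.Module):
    """Densely self-connected variant: every layer receives all earlier graph states."""
    def __init__(self, dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(SCNMPLayer(dim) for _ in range(num_layers))
        self.out = nn.Linear(dim * (num_layers + 1), dim)

    def forward(self, h, adj):
        g = h.mean(dim=0)                   # initial graph representation
        states = [g]
        for layer in self.layers:
            # dense connectivity: feed the sum of all previous graph states to this layer
            h, g = layer(h, adj, torch.stack(states).sum(dim=0))
            states.append(g)
        # final graph representation built from every layer's graph state
        return self.out(torch.cat(states, dim=-1))

# Usage on a toy 4-node cycle graph
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float)
h0 = torch.randn(4, 16)
model = DSCNMP(dim=16)
print(model(h0, adj).shape)  # torch.Size([16])
```

The returned vector can then be passed to a task-specific head (e.g., a classifier or regressor); the dense summation of earlier graph states is one plausible way to realize the feed-forward layer-to-layer connections mentioned in the abstract.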