Computer Science
Smoothing
Graph
Embedding
Artificial Neural Network
Interpretability
Node (physics)
Theoretical Computer Science
Artificial Intelligence
Data Mining
Machine Learning
Structural Engineering
Engineering
Computer Vision
Authors
Chuang Liu,Jia Wu,Weiwei Liu,Wenbin Hu
Identifier
DOI:10.1016/j.neunet.2021.04.025
Abstract
Graph Neural Networks (GNNs), such as GCN, GraphSAGE, GAT, and SGC, have achieved state-of-the-art performance on a wide range of graph-based tasks. These models all use a technique called neighborhood aggregation, in which the embedding of each node is updated by aggregating the embeddings of its neighbors. However, not all information aggregated from neighbors is beneficial; in some cases, a portion of the neighbor information may be harmful to the downstream tasks. To achieve a high-quality aggregation of beneficial information, we propose a flexible method, EGAI (Enhancing Graph neural networks by a high-quality Aggregation of beneficial Information). The core concept of this method is to filter out redundant and harmful information by removing specific edges during each training epoch. The practical and theoretical motivations, considerations, and strategies related to this method are discussed in detail. EGAI is a general method that can be combined with many backbone models (e.g., GCN, GraphSAGE, GAT, and SGC) to enhance their performance on the node classification task. In addition, EGAI slows the onset of over-smoothing that occurs when models are deepened. Extensive experiments on three real-world networks demonstrate that EGAI indeed improves the performance of both shallow and deep GNN models and, to some extent, mitigates over-smoothing. The code is available at https://github.com/liucoo/egai.
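The abstract's core idea, removing a subset of edges before each training epoch so that aggregation draws on fewer harmful neighbors, can be sketched with a minimal stand-in. Note this is a generic per-epoch random edge-dropping sketch (in the spirit of techniques like DropEdge), not the actual EGAI selection rule, which targets *specific* edges; the function name and `drop_rate` parameter are assumptions for illustration.

```python
import random

def drop_edges(edges, drop_rate=0.2, rng=None):
    """Return a sparser edge list for one training epoch.

    Hypothetical stand-in for EGAI's per-epoch edge removal: here
    edges are dropped uniformly at random, whereas the real method
    scores and removes specific redundant/harmful edges.
    """
    rng = rng or random.Random(0)
    return [e for e in edges if rng.random() >= drop_rate]

# Toy graph: each epoch the model aggregates over a reduced edge set.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
epoch_edges = drop_edges(edges, drop_rate=0.4)
```

In a full training loop, `epoch_edges` would be resampled (or re-selected) at the start of every epoch and used to build that epoch's adjacency for neighborhood aggregation.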