Computer science
Overfitting
Theoretical computer science
Feature learning
Artificial intelligence
Dense graph
Graph
Machine learning
Artificial neural network
Line graph
Pathwidth
Source
Journal: IEEE Transactions on Knowledge and Data Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume/Issue: 1-1
Citations: 55
Identifiers
DOI: 10.1109/tkde.2021.3072345
Abstract
Graph Neural Networks (GNNs) have proved to be an effective representation learning framework for graph-structured data, and have achieved state-of-the-art performance on many practical predictive tasks. Among the variants of GNNs, Graph Attention Networks (GATs) improve the performance of many graph learning tasks through a dense attention mechanism. However, real-world graphs are often very large and noisy, and GATs are prone to overfitting if not regularized properly. In this paper, we propose Sparse Graph Attention Networks (SGATs) that learn sparse attention coefficients under an L0-norm regularization; the learned sparse attentions are then shared across all GNN layers, resulting in an edge-sparsified graph. By doing so, we can identify noisy/task-irrelevant edges and thus perform feature aggregation over only the most informative neighbors. Extensive experiments on synthetic and real-world (assortative and disassortative) graph learning benchmarks demonstrate the superior performance of SGATs. Furthermore, the removed edges can be interpreted intuitively and quantitatively. To the best of our knowledge, this is the first graph learning algorithm that shows significant redundancy exists in graphs, and that edge-sparsified graphs can achieve predictive performance similar to (on assortative graphs) or sometimes higher than (on disassortative graphs) that of the original graphs. Our code is available at https://github.com/Yangyeeee/SGAT.
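The abstract does not spell out how an L0-norm penalty is made differentiable. A minimal illustrative sketch (not the authors' implementation) is the hard-concrete relaxation commonly used for L0 regularization: each edge gets a stochastic gate in [0, 1] that multiplies its attention score, and the expected number of non-zero gates serves as the trainable sparsity penalty. All variable names and hyperparameter values below are assumptions for illustration.

```python
import numpy as np

def hard_concrete_gates(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, rng=None):
    """Sample hard-concrete gates in [0, 1], a common differentiable
    relaxation of binary L0 gates (Louizos et al., 2018). Fixed seed
    here only to make the sketch deterministic."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def expected_l0_penalty(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Expected number of non-zero gates: a differentiable surrogate
    for the L0 norm, added to the task loss with some weight lambda."""
    return np.sum(1.0 / (1.0 + np.exp(-(log_alpha - beta * np.log(-gamma / zeta)))))

# Toy example: 4 edges into one node, one learnable logit per edge.
log_alpha = np.array([3.0, 3.0, -3.0, -3.0])  # two "keep", two "drop" edges
z = hard_concrete_gates(log_alpha)            # hypothetical sampled gates
attn = np.exp([0.5, 0.2, 0.9, 0.1])           # unnormalized attention scores
sparse_attn = z * attn
sparse_attn /= sparse_attn.sum() + 1e-12      # renormalize over surviving edges
penalty = expected_l0_penalty(log_alpha)      # would be scaled and added to the loss
```

Gates driven to exactly zero remove their edges from aggregation entirely, which is how an edge-sparsified graph emerges from training rather than from a fixed threshold.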