Computer Science
Spurious Correlation
Graph
Discriminative Model
Artificial Intelligence
Algorithm
Theoretical Computer Science
Pattern Recognition (Psychology)
Machine Learning
Authors
Haoyang Li, Xin Wang, Ziwei Zhang, Wenwu Zhu
Source
Journal: IEEE Transactions on Knowledge and Data Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2022-07-25
Volume/Issue: 35 (7): 7328-7340
Citations: 42
Identifier
DOI: 10.1109/tkde.2022.3193725
Abstract
Graph neural networks (GNNs) have achieved impressive performance when the testing and training graph data come from the same distribution. However, existing GNNs lack out-of-distribution generalization ability, so their performance degrades substantially when there are distribution shifts between the testing and training graph data. To solve this problem, we propose an out-of-distribution generalized graph neural network (OOD-GNN) that achieves satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs. OOD-GNN employs a novel nonlinear graph representation decorrelation method based on random Fourier features, which encourages the model to eliminate the statistical dependence between relevant and irrelevant graph representations by iteratively optimizing the sample graph weights and the graph encoder. We further present a global weight estimator that learns weights for training graphs such that the variables in the graph representations are forced to be independent. The learned weights help the graph encoder discard spurious correlations and, in turn, concentrate on the true connection between the learned discriminative graph representations and their ground-truth labels. We conduct extensive experiments to validate the out-of-distribution generalization ability of OOD-GNN on two synthetic and 12 real-world datasets with distribution shifts. The results demonstrate that OOD-GNN significantly outperforms state-of-the-art baselines.
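The abstract describes decorrelating the dimensions of pooled graph representations by mapping them through random Fourier features and learning per-graph sample weights that drive the weighted cross-covariances between dimensions toward zero. The following is a minimal sketch of that idea in PyTorch, assuming the paper's general setup but not its actual code: the function names (random_fourier_features, weighted_decorrelation_loss), the softmax parameterization of sample weights via sample_logits, and hyperparameters such as num_features and sigma are illustrative assumptions, not the authors' implementation.

```python
import math
import torch

def random_fourier_features(x, num_features=64, sigma=1.0):
    """Map each scalar dimension of x (n x d) into num_features random Fourier features."""
    n, d = x.shape
    # Frequencies w ~ N(0, 1/sigma^2) and phases b ~ U[0, 2*pi], drawn per representation dimension.
    w = torch.randn(d, num_features, device=x.device) / sigma
    b = 2 * math.pi * torch.rand(d, num_features, device=x.device)
    # Output shape: n x d x num_features.
    return (2.0 / num_features) ** 0.5 * torch.cos(x.unsqueeze(-1) * w + b)

def weighted_decorrelation_loss(z, sample_logits, num_features=64):
    """Penalize weighted cross-covariance between RFF maps of different representation dimensions."""
    w = torch.softmax(sample_logits, dim=0)            # normalized per-graph weights, shape (n,)
    phi = random_fourier_features(z, num_features)     # n x d x m
    mean = (w.view(-1, 1, 1) * phi).sum(dim=0)         # weighted mean per (dimension, feature), d x m
    centered = phi - mean
    loss = z.new_zeros(())
    d = z.shape[1]
    for i in range(d):
        for j in range(i + 1, d):
            # Weighted cross-covariance matrix between dimensions i and j (m x m).
            cov = (w.view(-1, 1, 1)
                   * centered[:, i, :].unsqueeze(2)
                   * centered[:, j, :].unsqueeze(1)).sum(dim=0)
            loss = loss + (cov ** 2).sum()             # squared Frobenius norm
    return loss

# Toy usage: optimize the sample weights to reduce dependence between representation dimensions.
# In the full training scheme this step would alternate with updating the graph encoder under
# the sample-weighted task loss.
z = torch.randn(128, 8)                                # stand-in for pooled graph representations
sample_logits = torch.zeros(128, requires_grad=True)
opt = torch.optim.Adam([sample_logits], lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    loss = weighted_decorrelation_loss(z, sample_logits)
    loss.backward()
    opt.step()
```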