Computer Science
Adversarial system
Artificial Intelligence
Graph
Natural Language Processing
Machine Learning
Theoretical Computer Science
Authors
Yixiang Dong,Minnan Luo,Jundong Li,Ziqi Liu,Qinghua Zheng
Identifier
DOI: 10.1109/TKDE.2024.3366396
Abstract
Semi-supervised graph learning aims to improve learning performance by leveraging unlabeled nodes. Typically, it can be approached in two different ways: predictive representation learning (PRL), where unlabeled data provide clues about the input distribution, and label-dependent regularization (LDR), which smooths the output distribution over unlabeled nodes to improve generalization. However, most existing PRL approaches suffer from overfitting in an end-to-end setting, or cannot encode task-specific information when used as unsupervised pre-training (i.e., two-stage learning). Meanwhile, LDR strategies often introduce redundant and invalid data perturbations that can slow down and mislead training. To address these issues, we propose SemiGraL, a general framework for semi-supervised learning on graphs that bridges and facilitates both PRL and LDR in a single shot. By extending a contrastive learning architecture to the semi-supervised setting, we first develop a semi-supervised contrastive representation learning process with virtual adversarial augmentation, which maps input nodes into a label-preserving representation space while avoiding overfitting. We then introduce a multiview consistency classification process with well-constrained perturbations to achieve adversarially robust classification. Extensive experiments on seven semi-supervised node classification benchmarks show that SemiGraL outperforms a variety of baselines while exhibiting strong generalization and robustness.
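The abstract's "virtual adversarial augmentation" refers to perturbing inputs along the direction that most changes the model's output distribution, without using labels. Below is a minimal numpy sketch of that idea (in the style of virtual adversarial training, VAT) for a toy linear softmax classifier; it is not the paper's implementation, and all names (`vat_perturbation`, `xi`, `eps`) are illustrative. One power-iteration step estimates the direction `d` that maximizes the KL divergence between the prediction at `x` and at `x + xi*d`, and the returned perturbation is that direction scaled to radius `eps`.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL(p || q) over the last axis, with a small floor for numerical safety
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)

def vat_perturbation(x, W, eps=0.5, xi=1e-3, n_iter=1, rng=None):
    """Estimate a virtual adversarial perturbation for the linear softmax
    model p(y|x) = softmax(x @ W), using one (or more) power-iteration steps.

    For logits l = (x + r) @ W, the gradient of KL(p || softmax(l)) with
    respect to the perturbation r is W @ (q - p), where q is the perturbed
    prediction -- so no autodiff is needed for this toy model.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    p = softmax(x @ W)                        # current prediction (held fixed)
    d = rng.standard_normal(x.shape)          # random initial direction
    d /= np.linalg.norm(d) + 1e-12
    for _ in range(n_iter):
        q = softmax((x + xi * d) @ W)         # prediction under a tiny probe step
        g = W @ (q - p)                       # analytic grad of KL w.r.t. the perturbation
        d = g / (np.linalg.norm(g) + 1e-12)   # power iteration: renormalize
    return eps * d                            # adversarial direction at radius eps
```

In SemiGraL this kind of perturbation plays two roles described in the abstract: as a label-preserving augmentation inside the contrastive representation-learning stage, and as a well-constrained perturbation for the consistency-based classification stage.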