Keywords
Computer science, Discriminative model, Artificial intelligence, Semi-supervised learning, Machine learning, Graph, Feature learning, Node (physics), Regularization (linguistics), Supervised learning, Exploit, Centrality, Pattern recognition (psychology), Artificial neural network, Theoretical computer science, Mathematics, Computer security, Structural engineering, Combinatorics, Engineering
Authors
Mohadeseh Ghayekhloo,Ahmad Nickabadi
Identifier
DOI:10.1016/j.neucom.2024.127710
Abstract
Graph Neural Networks (GNNs) have exhibited significant success in various applications, but they face challenges when labeled nodes are limited. A novel self-supervised learning paradigm has emerged, enabling GNN training without labeled nodes and even surpassing GNNs trained with limited labeled data. However, self-supervised methods lack class-discriminative node representations because no label information is available during training. In this paper, we present a supervised graph contrastive learning (SGCL) framework to tackle the issue of limited labeled nodes, ensuring coherent grouping of nodes within the same class. We introduce augmentation techniques based on a novel centrality function to highlight important topological structures. Additionally, we inject noise into less informative node features, compelling the model to extract the underlying semantic information. Our approach combines a supervised contrastive loss with node similarity regularization, achieving consistent grouping of unlabeled nodes with labeled ones. Furthermore, we utilize a pseudo-labeling technique to propagate label information to distant nodes and to address the underfitting problem, especially for low-degree nodes. Experimental results on real-world graphs demonstrate that SGCL outperforms both semi-supervised and self-supervised methods in node classification.
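The abstract combines several components (centrality-based augmentation, feature noise injection, supervised contrastive loss, node similarity regularization, pseudo-labeling). As a rough illustration of the central ingredient, the sketch below shows a SupCon-style supervised contrastive loss over node embeddings, where labeled nodes of the same class form positive pairs and all other nodes serve as negatives. This is a minimal sketch, not the authors' released code; the function name, temperature value, and masking details are assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's exact formulation) of a supervised
# contrastive loss over node embeddings: labeled nodes sharing a class are
# pulled together; all other nodes act as negatives.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, labeled_mask, temperature=0.5):
    """z: (N, d) node embeddings; labels: (N,) class ids; labeled_mask: (N,) bool."""
    z = F.normalize(z, dim=1)                                   # cosine-similarity space
    sim = torch.mm(z, z.t()) / temperature                      # pairwise similarities
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()    # numerical stability

    # Positives: pairs of labeled nodes with the same class (excluding self-pairs).
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    both_labeled = labeled_mask.unsqueeze(0) & labeled_mask.unsqueeze(1)
    pos_mask = (same_class & both_labeled).fill_diagonal_(False)

    # Log-softmax over all other nodes as candidates.
    logits_mask = torch.ones_like(pos_mask).fill_diagonal_(False)
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    # Average log-likelihood of positives, only for anchors that have positives.
    pos_count = pos_mask.sum(dim=1)
    has_pos = pos_count > 0
    mean_log_prob_pos = (pos_mask * log_prob).sum(dim=1)[has_pos] / pos_count[has_pos]
    return -mean_log_prob_pos.mean()
```

In the paper's setting, unlabeled nodes would additionally enter this objective once pseudo-labels are assigned, which is how label information is propagated to distant and low-degree nodes; that step is omitted from this sketch.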