Computer science
Artificial intelligence
Classifier (UML)
Binary number
Binary classification
Graph
Machine learning
Pattern recognition (psychology)
Feature learning
Theoretical computer science
Support vector machine
Mathematics
Arithmetic
Authors
Yufei Jin, Xingquan Zhu
Identifier
DOI:10.1109/bigdata55660.2022.10020970
Abstract
Graph Contrastive Learning (GCL) has recently emerged to leverage contrastive loss as a pseudo-supervision signal for self-supervised learning. To introduce a contrastive learning loss to graphs, existing GCL methods mostly leverage network topology or node similarity to classify a pair of nodes as a same/different node pair or a close/distant node pair. In this paper, we propose a semi-supervised graph contrastive learning framework, pmGCL, which leverages GCL to augment the performance of a classifier through a predictive masking approach. Specifically, a classifier is trained on a small number of labeled nodes to predict node labels. The label predictions are then transformed, for all node pairs, into a binary prediction of whether two nodes share the same label. The converted result, serving as a binary masking matrix, guides the subsequent GCL training to pull nodes likely belonging to the same class closer together and push nodes belonging to different classes further apart. Experiments and comparisons across different benchmark networks and label percentages show that pmGCL consistently outperforms rival graph convolutional neural network (GCN) and GCL baselines, with only a simple constraint posed on the problem.
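The core of the predictive masking step described above is turning per-node label predictions into a pairwise binary matrix: entry (i, j) is 1 when nodes i and j are predicted to have the same label, 0 otherwise. A minimal sketch of that conversion (the function name `prediction_mask` is hypothetical, not from the paper, and the actual pmGCL implementation may differ):

```python
import numpy as np

def prediction_mask(pred_labels):
    """Convert predicted node labels into a binary pairwise masking
    matrix: mask[i, j] = 1 if nodes i and j are predicted to share a
    label, else 0. The GCL loss can then use this mask to decide which
    node pairs to pull together and which to push apart."""
    pred = np.asarray(pred_labels)
    # Broadcasting compares every pair of predicted labels at once.
    return (pred[:, None] == pred[None, :]).astype(int)

# Example: 4 nodes with predicted labels 0, 1, 0, 2.
mask = prediction_mask([0, 1, 0, 2])
# Nodes 0 and 2 share a predicted label, so mask[0, 2] == 1;
# the diagonal is all ones since each node trivially matches itself.
```

In the semi-supervised setting the classifier's predictions are noisy, so this mask serves as a soft supervisory signal for the contrastive loss rather than ground truth.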