Computer science
Artificial intelligence
Graph
Feature learning
Machine learning
Unsupervised learning
Theoretical computer science
Identifier
DOI:10.1007/978-3-031-30675-4_25
Abstract
In recent years, contrastive learning has emerged as a successful method for unsupervised graph representation learning. It generates two or more different views by data augmentation and maximizes the mutual information between the views. Prior approaches usually adopt naive data augmentation strategies or ignore the rich global information of the graph structure, leading to suboptimal performance. This paper proposes a contrast-based unsupervised graph representation learning framework, MPGCL. Since data augmentation is the key to contrastive learning, this paper proposes constructing higher-order networks by injecting similarity-based global information into the original graph. Adaptive and random augmentation strategies are then combined to generate two views with complementary semantic information, which preserve important semantic information while remaining sufficiently dissimilar. In addition, whereas previous methods treat only the same node in the other view as a positive sample, this paper identifies positive samples by capturing global information. In extensive experiments on eight real benchmark datasets, MPGCL outperforms both state-of-the-art unsupervised competitors and fully supervised methods on the downstream task of node classification. The code is available at: https://github.com/asfdd3/-miao/tree/src/MPGCL
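The abstract's idea of injecting similarity-based global information into the original graph can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's exact procedure: for each node, it adds edges to the top-k most similar nodes by cosine similarity over node features, yielding a "higher-order" adjacency matrix. The function name `build_higher_order_graph` and the choice of cosine similarity are assumptions for illustration only.

```python
import numpy as np

def build_higher_order_graph(adj, features, k=2):
    """Sketch: inject similarity-based global edges into a graph.

    For each node, connect it to its top-k most feature-similar nodes
    (cosine similarity), keeping all original edges. Hypothetical
    simplification of MPGCL's higher-order network construction.
    """
    n = adj.shape[0]
    # Row-normalize features so the dot product gives cosine similarity.
    norms = np.linalg.norm(features, axis=1, keepdims=True) + 1e-12
    normed = features / norms
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity

    aug = adj.copy()
    for i in range(n):
        topk = np.argsort(sim[i])[-k:]  # k most similar nodes to i
        aug[i, topk] = 1
        aug[topk, i] = 1  # keep the graph undirected
    return aug
```

For example, given two disconnected edges (0,1) and (2,3) where nodes 0 and 2 have nearly identical features, `build_higher_order_graph(adj, features, k=1)` adds a global edge (0,2) while preserving the original edges. Contrastive views would then be generated by augmenting this enriched graph.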