Authors
Rong Yan,Peng Bao,Xiao Zhang,Zhongyi Liu,Hui Liu
Identifier
DOI:10.1145/3616855.3635789
Abstract
Graph Contrastive Learning (GCL) methods benefit from two key properties: alignment and uniformity, which pull the representations of related objects together while pushing those of different objects apart. Most GCL methods aim to preserve alignment and uniformity through random graph augmentation strategies and indiscriminate negative sampling. However, their performance is highly sensitive to graph augmentation, which requires cumbersome trial-and-error and expensive domain-specific knowledge as guidance. Moreover, sampling negatives indiscriminately inevitably introduces sampling bias, i.e., negative samples drawn from the same class as the anchor. To remedy these issues, we propose a unified GCL framework towards Alignment-Uniformity Aware Representation learning (AUAR), which achieves better alignment while improving uniformity without graph augmentation or negative sampling. Specifically, we propose intra- and inter-alignment losses that align the representation of a node with itself and with its cluster centroid, maintaining label-invariant information. Furthermore, we introduce a uniformity loss, with theoretical analysis, that pushes the representations of unrelated nodes from different classes apart and tends to provide informative variance across classes. Extensive experiments demonstrate that our method outperforms existing GCL methods on node classification and clustering tasks across three widely-used datasets.
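To make the two properties concrete, below is a minimal NumPy sketch of the standard alignment and uniformity metrics for representations on the unit hypersphere. This illustrates the general notions the abstract refers to, not the paper's own AUAR losses (which additionally use cluster centroids and avoid negative sampling); the variable names and the temperature `t` are illustrative assumptions.

```python
import numpy as np

def normalize(x):
    # L2-normalize each row so representations lie on the unit hypersphere.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def alignment_loss(z, z_pos):
    # Mean squared distance between each anchor and its positive counterpart;
    # lower means related objects are mapped closer together.
    z, z_pos = normalize(z), normalize(z_pos)
    return np.mean(np.sum((z - z_pos) ** 2, axis=1))

def uniformity_loss(z, t=2.0):
    # Log of the average pairwise Gaussian potential (excluding self-pairs);
    # minimizing it spreads representations uniformly over the hypersphere.
    z = normalize(z)
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(z.shape[0], dtype=bool)
    return np.log(np.mean(np.exp(-t * sq_dists[mask])))

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
z_pos = z + 0.01 * rng.normal(size=z.shape)  # slightly perturbed positives
print(alignment_loss(z, z_pos))  # small: the two views are well aligned
print(uniformity_loss(z))        # negative; more negative = more uniform
```

A GCL objective typically minimizes a weighted sum of these two terms; the paper's contribution is obtaining both without the augmented positive views and sampled negatives that this generic sketch takes as inputs.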