Subspace topology
Pairwise comparison
Graph
Computer science
Pattern recognition (psychology)
Artificial intelligence
Rank (graph theory)
Sparse approximation
Linear subspace
Theoretical computer science
Node (physics)
Algorithm
Mathematics
Combinatorics
Geometry
Structural engineering
Engineering
Authors
Gehang Zhang, Jiawei Sheng, Shicheng Wang, Tingwen Liu
Identifier
DOI:10.1109/icassp48485.2024.10445929
Abstract
Graph contrastive learning aims to learn a representative model by maximizing the agreement between different views of the same graph. Existing studies usually allow multifarious noise in data augmentation, and suffer from trivial and inconsistent generation of graph views. Moreover, they mostly impose contrastive constraints on pairwise representations, limiting the structural correlations among multiple nodes. Both problems may hinder graph contrastive learning, leading to suboptimal node representations. To this end, we propose a novel graph contrastive learning framework, namely GCL-LS, via low-rank and sparse subspace decomposition. In particular, it decomposes node representations into low-rank and sparse components, preserving structural correlations and compressed features in the low-rank and sparse subspace, respectively. By contrasting the representations in the subspaces, it naturally disentangles low-quality noise in data augmentation, and captures structural correlations and substantial features of nodes in contrastive learning. Experimental results show that our method significantly improves downstream node classification accuracy, and further analysis demonstrates the effectiveness of the subspace decomposition in graph contrastive learning. The code is released at https://github.com/zhanggehang/GCL_LS.
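The abstract's core idea is splitting node representations into a low-rank component (structural correlations) and a sparse component (compressed features). The paper's actual GCL-LS implementation is in the linked repository; as a rough illustration of the underlying operation only, the following is a minimal sketch of a generic GoDec-style low-rank plus sparse decomposition, H ≈ L + S, where the function name, the toy matrix, and all parameter values are illustrative assumptions, not the authors' method.

```python
import numpy as np

def lowrank_sparse_decompose(H, rank=4, sparse_thresh=0.1, n_iter=20):
    """Alternating scheme splitting H into a low-rank part L and a
    sparse part S so that H ~= L + S (generic sketch, not GCL-LS)."""
    S = np.zeros_like(H)
    for _ in range(n_iter):
        # Low-rank step: truncated SVD of the non-sparse residual.
        U, sigma, Vt = np.linalg.svd(H - S, full_matrices=False)
        L = (U[:, :rank] * sigma[:rank]) @ Vt[:rank, :]
        # Sparse step: soft-threshold the remaining residual, so only
        # large-magnitude entries survive in S.
        R = H - L
        S = np.sign(R) * np.maximum(np.abs(R) - sparse_thresh, 0.0)
    return L, S

# Toy usage: a synthetic "node representation" matrix with a rank-4
# signal plus one large sparse spike (purely illustrative data).
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4)) @ rng.standard_normal((4, 16))
H[2, 5] += 5.0
L, S = lowrank_sparse_decompose(H, rank=4)
```

In the paper's framework, contrastive losses are then applied to the two subspace components separately, which is what lets the model discard augmentation noise while keeping multi-node structural correlations; the sketch above only covers the decomposition step itself.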