Computer science
Maximum entropy
Graph
Feature learning
Artificial intelligence
Theoretical computer science
Node (physics)
Natural language processing
Computer network
Channel (broadcasting)
Structural engineering
Blind signal separation
Engineering
Authors
Yanbei Liu,Wanjin Shan,Xiao Wang,Zhitao Xiao,Lei Geng,Fang Zhang,Dongdong Du,Yanwei Pang
Identifiers
DOI:10.1016/j.patcog.2023.109907
Abstract
Graph representation learning aims to learn low-dimensional representations for the graph, which have played a vital role in real-world applications. Without requiring additional labeled data, contrastive-learning-based graph representation learning (or graph contrastive learning) has attracted considerable attention. Recently, one of the most exciting advancements in graph contrastive learning is Deep Graph Infomax (DGI), which maximizes the Mutual Information (MI) between the node and graph representations. However, DGI only considers the contextual node information, ignoring the intrinsic node information (i.e., the similarity between node representations in different views). In this paper, we propose a novel Cross-scale Contrastive Triplet Networks (CCTN) framework, which captures both contextual and intrinsic node information for graph representation learning. Specifically, to obtain the contextual node information, we utilize an infomax contrastive network to maximize the MI between node and graph representations. To acquire the intrinsic node information, we present a Siamese contrastive network that maximizes the similarity between node representations in different augmented views. The two contrastive networks learn together through a shared graph convolution network to form our cross-scale contrastive triplet networks. Finally, we evaluate CCTN on six real-world datasets. Extensive experimental results demonstrate that CCTN achieves state-of-the-art performance on node classification and clustering tasks.
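The abstract describes two contrastive objectives over a shared graph convolution encoder: a DGI-style infomax loss between node and graph representations (contextual scale) and a Siamese loss between node representations from two augmented views (intrinsic scale). The following is a minimal PyTorch sketch of that idea only, not the authors' released implementation; the `GCNEncoder` and `CrossScaleContrast` classes, the dense-adjacency convolution, the dropout-based augmentation, and the `alpha` loss weight are all hypothetical choices made for illustration.

```python
# Illustrative sketch (assumptions, not the CCTN paper's code): a shared GCN encoder,
# an infomax loss between node and graph summaries, and a Siamese cross-view loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNEncoder(nn.Module):
    """Single graph convolution layer over a dense, normalized adjacency matrix."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        return F.relu(adj @ self.lin(x))  # H = ReLU(A_hat X W)


class CrossScaleContrast(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = GCNEncoder(in_dim, hid_dim)    # shared by both branches
        self.disc = nn.Bilinear(hid_dim, hid_dim, 1)  # DGI-style discriminator
        self.proj = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                  nn.Linear(hid_dim, hid_dim))  # Siamese projector

    def infomax_loss(self, x, adj):
        """Contextual scale: score (node, graph-summary) pairs against corrupted nodes."""
        h = self.encoder(x, adj)                                 # real node representations
        h_neg = self.encoder(x[torch.randperm(x.size(0))], adj)  # corrupted (shuffled) nodes
        s = torch.sigmoid(h.mean(dim=0, keepdim=True))           # graph summary vector
        pos = self.disc(h, s.expand_as(h)).squeeze(-1)
        neg = self.disc(h_neg, s.expand_as(h_neg)).squeeze(-1)
        labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
        return F.binary_cross_entropy_with_logits(torch.cat([pos, neg]), labels)

    def siamese_loss(self, x1, adj1, x2, adj2):
        """Intrinsic scale: pull the same node's representations in two views together."""
        z1 = F.normalize(self.proj(self.encoder(x1, adj1)), dim=-1)
        z2 = F.normalize(self.proj(self.encoder(x2, adj2)), dim=-1)
        return -(z1 * z2).sum(dim=-1).mean()  # maximize cosine similarity

    def forward(self, x1, adj1, x2, adj2, alpha=0.5):
        # alpha is an assumed weighting between the two objectives.
        return self.infomax_loss(x1, adj1) + alpha * self.siamese_loss(x1, adj1, x2, adj2)


if __name__ == "__main__":
    n, d = 8, 16
    x = torch.randn(n, d)
    adj = torch.eye(n)            # stand-in for a normalized adjacency matrix
    x2 = F.dropout(x, p=0.2)      # toy feature-masking augmentation for the second view
    model = CrossScaleContrast(d, 32)
    print(model(x, adj, x2, adj).item())
```

In this sketch both losses backpropagate through the same `GCNEncoder`, which mirrors the abstract's statement that the two contrastive networks learn through a shared graph convolution network; the actual augmentations, readout, and loss weighting used by CCTN are specified in the paper, not here.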