Discriminant
Computer science
Adjacency matrix
Feature learning
Artificial intelligence
Graph
Cluster analysis
Theoretical computer science
Machine learning
Pattern recognition (psychology)
Authors
Xin Peng,Jieren Cheng,Xiangyan Tang,Jing‐Xin Liu,Jiahua Wu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Pages: 1-11
Citations: 2
Identifier
DOI:10.1109/tnnls.2023.3244397
Abstract
Graph representation is an important part of graph clustering. Recently, contrastive learning, which maximizes the mutual information between augmented graph views that share the same semantics, has become a popular and powerful paradigm for graph representation. However, during patch contrasting, existing methods tend to learn all features into similar variables, i.e., representation collapse, leading to less discriminative graph representations. To tackle this problem, we propose a novel self-supervised learning method called the dual contrastive learning network (DCLN), which aims to reduce the redundant information in learned latent variables in a dual manner. Specifically, we propose the dual curriculum contrastive module (DCCM), which approximates the node similarity matrix to a high-order adjacency matrix and the feature similarity matrix to an identity matrix. In this way, the informative content in high-order neighbors can be collected and preserved, while irrelevant redundant features among representations are eliminated, improving the discriminative capacity of the graph representation. Moreover, to alleviate the problem of sample imbalance during the contrastive process, we design a curriculum learning strategy that enables the network to simultaneously learn reliable information from two levels. Extensive experiments on six benchmark datasets demonstrate the effectiveness and superiority of the proposed algorithm compared with state-of-the-art methods.