Pairwise comparison
Computer science
Theoretical computer science
Artificial intelligence
Graph
Entropy (arrow of time)
Artificial neural network
Mutual information
Machine learning
Pattern recognition (psychology)
Physics
Quantum mechanics
Authors
Yixuan Ma,Xiaolin Zhang,Peng Zhang,Kun Zhan
Identifier
DOI:10.1145/3581783.3612047
Abstract
Contrastive learning on graphs aims at extracting distinguishable high-level representations of nodes. We theoretically illustrate that the entropy of a dataset is approximated by maximizing the lower bound of the mutual information across different views of a graph, i.e., entropy is estimated by a neural network. Based on this finding, we propose a simple yet effective subset sampling strategy to contrast pairwise representations between views of a dataset. In particular, we randomly sample nodes and edges from a given graph to build the input subset for a view. Two views are fed into a parameter-shared Siamese network to extract the high-dimensional embeddings and estimate the information entropy of the entire graph. For the learning process, we propose to optimize the network using two objectives simultaneously. Concretely, the input of the contrastive loss consists of positive and negative pairs. Our pair-selection strategy differs from previous works: we present a novel strategy that enhances the representation ability by selecting nodes based on cross-view similarities. We enrich the diversity of the positive and negative pairs by selecting highly similar samples and clearly dissimilar samples, respectively, with the guidance of cross-view similarity scores. We also introduce a cross-view consistency constraint on the representations generated from the different views. We conduct experiments on seven graph benchmarks, and the proposed approach achieves competitive performance compared to current state-of-the-art methods. The source code is available at https://github.com/kunzhan/M-ILBO.
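The sketch below illustrates the main ingredients described in the abstract: random node/edge subset sampling to build two views, a parameter-shared Siamese encoder, positive/negative pair selection from cross-view similarity scores, and a cross-view consistency term. It is a minimal illustration under simplifying assumptions (dense adjacency matrix, a single GCN-like layer, cosine similarity, top-k pair selection), not the authors' released implementation; all function and variable names here are hypothetical, and the official code is at https://github.com/kunzhan/M-ILBO.

```python
import torch
import torch.nn.functional as F


def sample_subset_view(x, adj, node_keep=0.8, edge_keep=0.8):
    """Randomly sample nodes and edges from a graph to build one input view."""
    n = x.size(0)
    node_mask = (torch.rand(n) < node_keep).float().unsqueeze(1)   # drop some node features
    edge_mask = (torch.rand_like(adj) < edge_keep).float()         # drop some edges
    return x * node_mask, adj * edge_mask


class SiameseEncoder(torch.nn.Module):
    """Parameter-shared encoder applied to both views (one GCN-like layer here)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj @ x) / deg))                   # mean neighborhood aggregation


def cross_view_loss(z1, z2, k_pos=2, k_neg=2, tau=0.5):
    """Contrast pairs chosen by cross-view similarity, plus a consistency constraint."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                                        # cross-view similarity scores
    pos_idx = sim.topk(k_pos, dim=1).indices                       # most similar nodes -> positives
    neg_idx = (-sim).topk(k_neg, dim=1).indices                    # least similar nodes -> negatives
    pos = sim.gather(1, pos_idx).exp().sum(1)
    neg = sim.gather(1, neg_idx).exp().sum(1)
    contrastive = -(pos / (pos + neg)).log().mean()
    consistency = F.mse_loss(z1, z2)                               # cross-view consistency term
    return contrastive + consistency


# Toy usage: two sampled views of one random graph, optimized with both objectives.
x = torch.randn(100, 16)
adj = (torch.rand(100, 100) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
encoder = SiameseEncoder(16, 32)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(5):
    x1, a1 = sample_subset_view(x, adj)
    x2, a2 = sample_subset_view(x, adj)
    loss = cross_view_loss(encoder(x1, a1), encoder(x2, a2))
    opt.zero_grad()
    loss.backward()
    opt.step()
```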