Mutual information
Computer science
Feature learning
Artificial intelligence
Graph
Theoretical computer science
Authors
Yuhua Xu,Junli Wang,Mingjian Guang,Chungang Yan,Changjun Jiang
Identifier
DOI:10.1016/j.ins.2024.120378
Abstract
Graph contrastive learning has achieved rapid development in learning representations from graph-structured data; it aims to maximize the mutual information between two representations learned from different augmented views of a graph. However, maximizing the mutual information between different views without any constraints may cause encoders to capture information irrelevant to downstream tasks, limiting the effectiveness of graph contrastive learning methods. To tackle this issue, we propose a Graph Contrastive Learning method with Min-max mutual Information (GCLMI). Specifically, we conduct a theoretical analysis to motivate our learning objective, which uses a min-max principle to constrain the mutual information among multiple views: between the graph and each of its augmented views, as well as between the different augmented views. Based on this learning objective, we construct two augmented views by separating the feature and topology information of a graph, so that each view preserves different semantic information from the graph. We then maximize the mutual information between each augmented view and the graph while minimizing the mutual information between the two augmented views, to learn informative and diverse representations. Extensive experiments are conducted on a variety of graph datasets, and the results show that GCLMI achieves better or competitive performance compared with state-of-the-art methods.
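The min-max objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an InfoNCE-style lower bound is used as the mutual-information estimator (a common choice in contrastive learning), and the function names, the trade-off weight `alpha`, and the temperature `tau` are hypothetical.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE lower bound on mutual information between paired rows of z1, z2."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize rows
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # pairwise cosine similarities / temperature
    sim = sim - sim.max(axis=1, keepdims=True)  # shift rows for numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))  # row-wise log-softmax
    return float(np.mean(np.diag(log_prob)))    # average log-probability of positive pairs

def min_max_mi_loss(z_graph, z_feat_view, z_topo_view, alpha=1.0):
    """Maximize MI(graph, each view); minimize MI(feature view, topology view).

    z_graph, z_feat_view, z_topo_view: (n_nodes, dim) representations of the
    original graph, the feature-only view, and the topology-only view.
    """
    mi_graph_feat = info_nce(z_graph, z_feat_view)
    mi_graph_topo = info_nce(z_graph, z_topo_view)
    mi_feat_topo = info_nce(z_feat_view, z_topo_view)
    # Negate the terms to maximize; add the term to minimize, weighted by alpha.
    return -(mi_graph_feat + mi_graph_topo) + alpha * mi_feat_topo
```

Minimizing this loss pushes each augmented view to stay informative about the original graph while keeping the two views diverse, matching the informative-and-diverse goal stated in the abstract.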