Generalizability theory
Computer science
Machine learning
Graph
Artificial intelligence
Robustness (evolution)
Convolutional neural network
Unsupervised learning
Parameterized complexity
Labeled data
Theoretical computer science
Feature learning
Knowledge graph
Algorithm
Mathematics
Statistics
Gene
Biochemistry
Chemistry
Authors
Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen
Source
Venue: Neural Information Processing Systems
Date: 2020-01-01
Volume/Pages: 33: 5812-5823
Citations: 106
Abstract
Generalizable, transferable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike what has been developed for convolutional neural networks (CNNs) on image data, self-supervised learning and pre-training are less explored for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data. We first design four types of graph augmentations to incorporate various priors. We then systematically study the impact of various combinations of graph augmentations on multiple datasets in four different settings: semi-supervised learning, unsupervised learning, transfer learning, and adversarial attacks. The results show that, even without tuning augmentation extents or using sophisticated GNN architectures, our GraphCL framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods. We also investigate the impact of parameterized graph augmentation extents and patterns, and observe further performance gains in preliminary experiments. Our code is available at this https URL.
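The four augmentation families named in the paper (node dropping, edge perturbation, attribute masking, and subgraph sampling) and the contrastive objective can be illustrated in a few lines. The snippet below is a minimal sketch, not the authors' implementation: it assumes graphs as dense 0/1 adjacency matrices with node feature tensors, omits the GNN encoder and projection head, simplifies subgraph sampling to a random node subset (the paper grows subgraphs via random walks), and uses a simplified NT-Xent loss; all function names are illustrative.

```python
# Minimal sketch of GraphCL-style augmentations and a simplified
# NT-Xent contrastive loss. Assumes dense adjacency/feature tensors;
# the encoder (e.g., a GIN) and projection head are omitted.
import torch
import torch.nn.functional as F

def drop_nodes(adj, feat, ratio=0.2):
    """Node dropping: remove a random fraction of nodes and their edges."""
    n = adj.size(0)
    keep = torch.randperm(n)[: max(1, int(n * (1 - ratio)))]
    return adj[keep][:, keep], feat[keep]

def perturb_edges(adj, ratio=0.2):
    """Edge perturbation: flip a random fraction of node pairs."""
    n = adj.size(0)
    mask = (torch.rand(n, n) < ratio).float()
    mask = torch.triu(mask, diagonal=1)  # keep the graph undirected
    mask = mask + mask.t()
    return (adj + mask) % 2              # flip selected 0/1 entries

def mask_attributes(feat, ratio=0.2):
    """Attribute masking: zero out a random fraction of feature entries."""
    mask = (torch.rand_like(feat) >= ratio).float()
    return feat * mask

def sample_subgraph(adj, feat, ratio=0.8):
    """Subgraph sampling, simplified here to a random node subset."""
    n = adj.size(0)
    keep = torch.randperm(n)[: max(1, int(n * ratio))]
    return adj[keep][:, keep], feat[keep]

def nt_xent(z1, z2, temperature=0.5):
    """Simplified NT-Xent over two row-aligned batches of graph embeddings:
    matching rows are positives, all other rows are in-batch negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = (z1 @ z2.t()) / temperature   # pairwise cosine similarities
    labels = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```

In use, two independently augmented views of each graph in a minibatch are encoded into embeddings z1 and z2; the loss pulls the two views of the same graph together while pushing apart views of different graphs in the batch.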