Computer science
Discriminative
Artificial intelligence
Sample (material)
Graph
Machine learning
Data mining
Pattern recognition (psychology)
Natural language processing
Theoretical computer science
Chemistry
Chromatography
Authors
Wenxuan Tu, Sihang Zhou, Xinwang Liu, Chunpeng Ge, Zhiping Cai, Yue Liu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-08-17
Volume/Issue: 1-14
Citations: 9
Identifier
DOI: 10.1109/tnnls.2023.3297607
Abstract
Contrastive learning has recently emerged as a powerful technique for graph self-supervised pretraining (GSP). By maximizing the mutual information (MI) between a positive sample pair, the network is forced to extract discriminative information from graphs to generate high-quality sample representations. However, we observe that, in the process of MI maximization (Infomax), existing contrastive GSP algorithms suffer from at least one of the following problems: 1) treating all samples equally during optimization and 2) falling into a single contrasting pattern within the graph. Consequently, the vast number of well-categorized samples overwhelms the representation learning process, and limited information is accumulated, thus deteriorating the learning capability of the network. To solve these issues, in this article, by fusing the information from different views and conducting hard sample mining in a hierarchically contrastive manner, we propose a novel GSP algorithm called hierarchically contrastive hard sample mining (HCHSM). The hierarchical property of this algorithm is manifested in two aspects. First, according to the results of multilevel MI estimation in different views, the MI-based hard sample selection (MHSS) module keeps filtering the easy nodes and drives the network to focus more on hard nodes. Second, to collect more comprehensive information for hard sample learning, we introduce a hierarchically contrastive scheme to sequentially force the learned node representations to involve multilevel intrinsic graph features. In this way, as the contrastive granularity becomes finer, the complementary information from different levels can be uniformly encoded to boost the discrimination of hard samples and enhance the quality of the learned graph embedding. Extensive experiments on seven benchmark datasets indicate that HCHSM outperforms its competitors on node classification and node clustering tasks. The source code of HCHSM is available at https://github.com/WxTu/HCHSM.
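To make the two ideas in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of per-node MI estimation used to keep only hard nodes and of a hierarchical contrast that sums losses over several granularities. All function names (`mi_scores`, `select_hard_nodes`, `hierarchical_loss`), the InfoNCE-style estimator, the `keep_ratio` parameter, and the choice of three levels are assumptions for illustration; the actual method is defined in the paper and at https://github.com/WxTu/HCHSM.

```python
# Hypothetical sketch of MI-based hard sample selection + hierarchical contrast.
# Not the authors' code; see https://github.com/WxTu/HCHSM for the real HCHSM.
import torch
import torch.nn.functional as F


def mi_scores(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style per-node MI estimate between two views (both N x D)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                       # N x N cross-view similarities
    return sim.diag() - torch.logsumexp(sim, dim=1)  # higher = easier (well-aligned) node


def select_hard_nodes(z1, z2, keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the nodes with the lowest MI estimates, i.e. the hard ones."""
    scores = mi_scores(z1, z2).detach()
    k = max(1, int(keep_ratio * scores.numel()))
    return torch.topk(-scores, k).indices         # indices of the hardest nodes


def contrastive_loss(z1, z2, idx, tau: float = 0.5) -> torch.Tensor:
    """Symmetric InfoNCE over the selected node subset."""
    z1, z2 = F.normalize(z1[idx], dim=1), F.normalize(z2[idx], dim=1)
    sim = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))


def hierarchical_loss(views_per_level, keep_ratio: float = 0.5) -> torch.Tensor:
    """Sum contrastive losses over levels (e.g. node-, neighborhood-, graph-level),
    re-selecting hard nodes at each granularity."""
    total = torch.zeros(())
    for z1, z2 in views_per_level:                # list of (view1, view2) pairs per level
        idx = select_hard_nodes(z1, z2, keep_ratio)
        total = total + contrastive_loss(z1, z2, idx)
    return total


if __name__ == "__main__":
    # Toy usage: three levels of random 32-node, 16-dim embeddings from two views.
    levels = [(torch.randn(32, 16), torch.randn(32, 16)) for _ in range(3)]
    print(float(hierarchical_loss(levels)))
```

In this sketch, easy nodes are filtered by dropping those whose cross-view InfoNCE score is already high, and the remaining hard nodes contribute to a loss accumulated across the chosen granularities; the paper's MHSS module and multilevel contrasting scheme realize these roles with their own estimators and schedules.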