Computer science
Machine learning
Artificial intelligence
Graph
Transfer learning
Bottleneck
Labeled data
Task (project management)
Unsupervised learning
Theoretical computer science
Management
Economics
Embedded system
Authors
Zihao Chen, Ying Wang, Fuyuan Ma, Hao Yuan, Xin Wang
Identifier
DOI:10.1016/j.knosys.2024.111391
Abstract
Despite the impressive results achieved in many areas of graph machine learning through supervised graph representation learning, the limited availability of labeled training data has created a performance bottleneck. To address this challenge, transfer learning has been proposed as an effective solution. It involves designing pre-training methods in an unsupervised manner to learn representations, which are then adapted to downstream tasks with limited labeled data. However, transfer learning can suffer from negative transfer when there is a major gap between the objectives of pre-training and the downstream tasks. To overcome these challenges, we introduce a novel framework, graph prompt learning-graph neural network (GPL-GNN), to narrow the gap between different tasks. GPL-GNN employs unsupervised methods, which require no labeled data, and incorporates unsupervised pre-trained structural representations into downstream tasks as prompt information. This information is combined with downstream data to train GNNs, adapting them to the downstream tasks and yielding more adaptive, task-specific representations. Furthermore, because GPL-GNN learns graph representations without requiring model consistency between pre-training and fine-tuning, it offers greater flexibility in choosing task-specific GNNs. In addition, the introduction of prototype networks as classification heads enables quick adaptation of GPL-GNN to downstream tasks. Finally, we conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of GPL-GNN. The code is available at: https://github.com/chenzihaoww/GPL-GNN.
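To illustrate the prototype-network classification head mentioned in the abstract, here is a minimal sketch (assumed simplifications, not the authors' implementation): each class prototype is the mean embedding of that class's labeled support examples, and a query embedding is assigned to the class of its nearest prototype. The embeddings and data below are hypothetical.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Compute one prototype per class as the mean of its support embeddings."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(queries, classes, protos):
    """Assign each query embedding to the class of its nearest prototype."""
    # Euclidean distance from every query to every prototype
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy example with 2-D node embeddings (hypothetical, for illustration only)
emb = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.8, 3.2]])
lab = np.array([0, 0, 1, 1])
classes, protos = build_prototypes(emb, lab)
pred = classify(np.array([[0.1, 0.0], [2.9, 3.1]]), classes, protos)
# pred → array([0, 1])
```

Because the head is parameter-free given the embeddings, swapping in new class prototypes adapts it to a new downstream task without retraining, which matches the "quick adaptation" property the abstract claims.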