Asynchronous communication
Computer science
Graph
Artificial neural network
Distributed computing
Artificial intelligence
Theoretical computer science
Machine learning
Computer network
Authors
Yuanming Liao,Duanji Wu,Pengyu Lin,Kun Guo
Source
Journal: Communications in Computer and Information Science
Date: 2024-01-01
Pages: 378–392
Identifier
DOI:10.1007/978-981-99-9637-7_28
Abstract
Graph neural networks have shown excellent performance in many fields owing to their powerful ability to process graph data. In recent years, federated graph neural networks have become a reasonable solution following the enactment of privacy-related regulations. However, frequent communication between the coordinator and participants in a federated graph neural network lengthens model training and consumes substantial communication resources. To address this challenge, we propose a novel semi-asynchronous federated graph learning communication protocol that simultaneously alleviates the negative impact of stragglers (slow participants) and accelerates training in the unsupervised federated graph neural network scenario. First, a weighted enforced synchronization strategy is designed to preserve the information carried by stragglers while preventing their stale models from harming the global model update. Second, an adaptive local update strategy is developed to keep the local model of a participant with poor computing performance as close as possible to the global model. Our experiments combine federated learning with graph contrastive learning, and the results demonstrate that the proposed protocol outperforms existing protocols on real-world networks.
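The abstract does not give the protocol's formulas, but the two strategies it names can be sketched at a high level. The following is a minimal illustrative sketch, not the paper's actual method: the staleness-decay weighting in `aggregate` and the proximal pull toward the global model in `adaptive_local_step` (including the names, the `1/(1+alpha*s)` decay form, and the `mu` coefficient) are all assumptions standing in for the unspecified weighted enforced synchronization and adaptive local update strategies.

```python
def staleness_weight(staleness, alpha=0.5):
    # Assumed decay form: the more global rounds a participant lags
    # behind (its staleness), the smaller its aggregation weight.
    return 1.0 / (1.0 + alpha * staleness)

def aggregate(updates, alpha=0.5):
    """Weighted enforced synchronization (sketch): every participant's
    update is included, so straggler information is preserved, but
    stale updates are down-weighted so they cannot dominate the
    global model. `updates` is a list of (param_vector, staleness)."""
    weights = [staleness_weight(s, alpha) for _, s in updates]
    total = sum(weights)
    dim = len(updates[0][0])
    return [sum(w * params[i] for w, (params, _) in zip(weights, updates)) / total
            for i in range(dim)]

def adaptive_local_step(local, global_model, grad, lr=0.1, mu=0.5):
    """Adaptive local update (one plausible reading): a slow participant
    adds a proximal pull toward the current global model, so its local
    model stays close to the global model while it computes."""
    return [p - lr * (g + mu * (p - q))
            for p, g, q in zip(local, grad, global_model)]

# Example: a fresh update (staleness 0) and a stale one (staleness 2).
merged = aggregate([([1.0, 1.0], 0), ([3.0, 3.0], 2)])
```

With `alpha=0.5` the stale model receives half the weight of the fresh one, so the aggregate lands closer to the fresh update; setting `mu=0` in `adaptive_local_step` recovers plain local SGD with no pull toward the global model.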