Keywords
Computer Science, Inference, Graph, Artificial Neural Network, Information Privacy, Encryption, Machine Learning, Computer Security, Artificial Intelligence, Theoretical Computer Science
Authors
Lina Ge, YanKun Li, Haiao Li, Lei Tian, Zhe Wang
Identifier
DOI: 10.1016/j.neucom.2024.128166
Abstract
Graph neural networks are widely employed across diverse domains, yet they face the risk of privacy infringement. Federated learning offers a privacy-preserving remedy: it trains models without sharing raw data, mitigating privacy leakage in graph neural networks. The rapid advancement of federated graph neural networks has in turn spurred demand for more powerful tools that exploit the hidden correlations among federated learning participants to improve model performance. However, the structural properties of federated graph neural networks leave them vulnerable to inference attacks, reconstruction attacks, inversion attacks, and similar threats that endanger privacy. This study examines privacy preservation in federated graph neural networks. First, it introduces the architecture and variants of federated graph neural networks, analyzes the privacy risks these networks face from four perspectives, and describes three primary attack methods. Guided by the privacy-preserving mechanisms of federated graph neural networks, it then surveys existing protection strategies from four perspectives: encryption methods, perturbation methods, anonymization, and hybrid methods. It also briefly presents prevailing frameworks for preserving privacy in neural networks. Finally, the paper examines open challenges and outlines future research directions for federated graph neural network technology.
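To make the perturbation category of defenses concrete, below is a minimal sketch (not from the paper) of one federated round in which each client trains locally, clips its update, and adds Gaussian noise before the server averages. The linear `local_update` model, the `clip` and `sigma` parameters, and the toy data are all illustrative assumptions standing in for a real GNN training step.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, features, labels, lr=0.1):
    """One gradient step on a client's private data.

    A linear least-squares model stands in for a GNN here; the
    client's raw features and labels never leave this function.
    """
    preds = features @ w_global
    grad = features.T @ (preds - labels) / len(labels)
    return w_global - lr * grad

def federated_round(w_global, clients, clip=1.0, sigma=0.5):
    """One server round: average the clipped, noised client updates."""
    deltas = []
    for features, labels in clients:
        w_local = local_update(w_global, features, labels)
        delta = w_local - w_global                       # the client's update
        norm = max(np.linalg.norm(delta), 1e-12)
        delta = delta * min(1.0, clip / norm)            # clip: bound sensitivity
        delta = delta + rng.normal(0.0, sigma * clip, size=delta.shape)  # perturb
        deltas.append(delta)
    return w_global + np.mean(deltas, axis=0)

# Toy setup: three clients, each holding private (features, labels) data.
clients = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(4)
for _ in range(5):
    w = federated_round(w, clients)
print("aggregated weights after 5 rounds:", w)
```

Clipping bounds any single client's influence on the aggregate and the added noise masks what remains, which is why the server (or an inference attacker observing the aggregate) learns less about any one participant's graph data, at some cost in model accuracy.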