Computer science
Differential privacy
Homomorphic encryption
Encryption
Information sensitivity
Information leakage
Computer security
Private information retrieval
Graph
Node (physics)
Theoretical computer science
Data mining
Computer network
Structural engineering
Engineering
Authors
Li Zhou, Li Wang, Dongmei Fan, Haifeng Zhang, Kai Zhong
Identifier
DOI:10.1016/j.physa.2023.129187
Abstract
Graph neural networks (GNNs) learn node representations that capture both node features and graph topology through the message-passing mechanism. However, the information collected by GNNs is often used without authorization or maliciously attacked by hackers, which may lead to leakage of users' private information. To this end, we propose a privacy-preserving GNN framework that not only protects attribute privacy but also performs well in various downstream tasks. Specifically, when users communicate with the third party, Paillier homomorphic encryption (HE) is used to encrypt users' sensitive attribute information to prevent privacy leakage. Considering that the third party may be untrustworthy, differential privacy (DP) with the Laplace mechanism is applied to add noise to the sensitive attribute information before transmission, so that the real attribute values are not accessible to the third party. Subsequently, the third party trains the GNN model using both the privacy-preserving attribute information and the public network topology. Extensive experiments show that, compared with state-of-the-art methods, the privacy-preserving GNN still achieves satisfactory performance on different downstream tasks, such as node classification and link prediction, while protecting the sensitive attributes of individuals.
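The two protection steps named in the abstract can be illustrated with a short sketch. This is a minimal illustration, not the paper's implementation: it assumes NumPy and the python-paillier (`phe`) package, and the epsilon, sensitivity, and key-length values are placeholders chosen only for demonstration.

```python
# Sketch of the two privacy steps described in the abstract:
# (1) Laplace-mechanism DP noise on a node's sensitive attribute vector,
# (2) Paillier homomorphic encryption of the noised attributes before transmission.
# Assumes NumPy and python-paillier (`pip install phe`); epsilon/sensitivity are illustrative.
import numpy as np
from phe import paillier


def add_laplace_noise(attributes, epsilon=1.0, sensitivity=1.0):
    """Perturb sensitive attributes with Laplace noise (scale = sensitivity / epsilon)."""
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=attributes.shape)
    return attributes + noise


def encrypt_attributes(public_key, attributes):
    """Encrypt each (already noised) attribute value with Paillier HE."""
    return [public_key.encrypt(float(x)) for x in attributes]


if __name__ == "__main__":
    # Toy sensitive attribute vector for a single user/node (placeholder values).
    sensitive = np.array([0.7, 1.3, 0.0, 2.5])

    # Step 1: Laplace-mechanism differential privacy before transmission.
    noised = add_laplace_noise(sensitive, epsilon=0.5, sensitivity=1.0)

    # Step 2: Paillier homomorphic encryption of the noised attributes.
    pub, priv = paillier.generate_paillier_keypair(n_length=1024)
    ciphertexts = encrypt_attributes(pub, noised)

    # The key holder can recover only the noised values; the third party
    # sees ciphertexts, never the real attribute values.
    recovered = np.array([priv.decrypt(c) for c in ciphertexts])
    print("noised:   ", np.round(noised, 3))
    print("recovered:", np.round(recovered, 3))
```

In this reading of the abstract, the DP noise is added on the user side before encryption, so even after decryption the third party only ever works with perturbed attributes while the public graph topology remains available for GNN training.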