Mask (illustration)
Mechanism (biology)
Computer science
Computer security
Physics
Art
Quantum mechanics
Visual arts
Authors
Chenyang Chen, Xiaoyu Zhang, Hongbo Qiu, Jian Lou, Zhengyang Liu, Xiaofeng Chen
Identifiers
DOI:10.1016/j.ins.2024.120579
Abstract
Graph neural networks (GNNs) have demonstrated remarkable performance on diverse graph-related tasks, including node classification, graph classification, and link prediction. However, prior research has shown that GNNs are vulnerable to membership inference attacks (MIA), in which an adversary infers whether a data point belongs to the training set by examining the model's output distribution. This raises serious privacy concerns, especially when the graph contains sensitive data. Existing defenses against graph MIA suffer from drawbacks such as high computational cost and reduced model accuracy. In this paper, we introduce MaskArmor, a novel defense framework designed to strengthen the privacy and security of GNNs against MIA. MaskArmor comprises four masking strategies (AdjMask, DTMask, ATMask, and SigMask) that leverage message-passing mechanisms, distillation temperature, hybrid masking, and the Sigmoid function, respectively. By obscuring the model's output distribution on both training and non-training samples, MaskArmor makes it difficult for attackers to determine whether a particular sample was used in training, while preserving model accuracy with negligible computational overhead. Our experiments span seven benchmark datasets and four GNN architectures under shadow-based and threshold-based MIAs, showing that MaskArmor substantially improves GNN resilience to MIA while preserving accuracy on the original tasks; strategies such as AdjMask and ATMask are particularly effective against threshold-based MIA. Extensive experimental results confirm that MaskArmor outperforms existing approaches and remains effective and applicable across diverse datasets and attack scenarios.
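To illustrate the general idea behind temperature-based output masking (the mechanism DTMask is described as leveraging), the sketch below shows how raising a softmax temperature flattens a model's posterior, shrinking the confidence gap that threshold-based MIAs exploit. This is a minimal illustration of the principle only, not the paper's actual algorithm; the logits and temperature values are assumptions chosen for demonstration.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature flattens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a confident, member-like prediction.
logits = [6.0, 1.0, 0.5]
sharp = softmax(logits, temperature=1.0)  # what the attacker would like to see
flat = softmax(logits, temperature=4.0)   # masked, less confident output

# A threshold-based MIA keys on the maximum posterior probability;
# flattening the distribution reduces that membership signal.
print(max(sharp), max(flat))
```

Both outputs remain valid probability distributions, so the model's predicted class is unchanged; only the attacker-visible confidence is dampened.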