Backdoor
Computer science
Graph
Artificial neural network
Artificial intelligence
Computer security
Theoretical computer science
Identifier
DOI:10.1145/3627673.3679905
Abstract
Graph Neural Networks (GNNs) have achieved remarkable success across various domains, yet recent studies have exposed their vulnerability to backdoor attacks. Backdoor attacks inject triggers into the training set to poison the model, with adversaries typically relabeling trigger-attached training samples as a target label. A GNN trained on the poisoned dataset then misclassifies any test sample containing the backdoor trigger as the target label. However, relabeling not only increases the cost of the attack but also raises the risk of detection. Therefore, our study focuses on clean-label backdoor attacks, which do not require modifying the labels of trigger-attached samples during training. Specifically, we employ a novel method to select effective poisoned samples belonging to the target class. An adaptive trigger generator is further deployed to achieve high attack success rates under a small backdoor budget. Our experiments on four public datasets validate the effectiveness of our proposed attack.
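To make the clean-label setting concrete, below is a minimal, hypothetical sketch of how such poisoning differs from standard relabeling attacks: a trigger subgraph is attached only to training graphs that already belong to the target class, so labels stay untouched. The paper's sample-selection method and adaptive trigger generator are not reproduced here; this sketch substitutes random selection and a fixed trigger, and all function names (`attach_trigger`, `poison_clean_label`) are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def attach_trigger(adj, trigger_adj, attach_nodes):
    """Attach a small trigger subgraph to a graph's adjacency matrix.

    adj: (n, n) adjacency matrix of the victim graph.
    trigger_adj: (k, k) adjacency matrix of the trigger subgraph.
    attach_nodes: k victim-node indices bridged to the trigger nodes.
    Returns an (n + k, n + k) adjacency matrix with the trigger attached.
    """
    n, k = adj.shape[0], trigger_adj.shape[0]
    out = np.zeros((n + k, n + k), dtype=adj.dtype)
    out[:n, :n] = adj                # original graph
    out[n:, n:] = trigger_adj        # trigger subgraph
    for i, v in enumerate(attach_nodes):
        out[v, n + i] = out[n + i, v] = 1  # bridge edges between graph and trigger
    return out

def poison_clean_label(graphs, labels, target_class, budget, trigger_adj):
    """Clean-label poisoning: attach the trigger only to graphs that already
    carry the target label, so no training label is changed."""
    target_idx = [i for i, y in enumerate(labels) if y == target_class]
    chosen = rng.choice(target_idx, size=min(budget, len(target_idx)), replace=False)
    poisoned = list(graphs)
    for i in chosen:
        n = graphs[i].shape[0]
        attach_nodes = rng.choice(n, size=trigger_adj.shape[0], replace=False)
        poisoned[i] = attach_trigger(graphs[i], trigger_adj, attach_nodes)
    return poisoned, labels  # labels returned unchanged (clean-label)
```

In this sketch the backdoor budget is simply the number of target-class graphs that receive the trigger; at test time, attaching the same trigger to any graph is intended to steer the trained model toward the target class.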