Adversarial system
Computer science
Homogeneous
Benchmark (surveying)
Deep neural network
Artificial neural network
Graph
Theoretical computer science
Enhanced Data Rates for GSM Evolution (EDGE)
Node (physics)
Computer security
Artificial intelligence
Mathematics
Structural engineering
Geodesy
Combinatorics
Engineering
Geography
Authors
Udesh Kumarasinghe, Mohamed Nabeel, Kasun De Zoysa, Kasun Gunawardana, Charitha Elvitigala
Identifier
DOI: 10.1109/icdmw58026.2022.00096
Abstract
Graph neural networks (GNNs) have achieved remarkable success in many application domains, including drug discovery, program analysis, social networks, and cyber security. However, it has been shown that they are not robust against adversarial attacks. Many adversarial attacks against homogeneous GNNs, and defenses against them, have recently been proposed. However, most of these attacks and defenses are ineffective on heterogeneous graphs, as these algorithms optimize under the assumption that all edges and nodes are of the same type, and they further introduce semantically incorrect edges into the perturbed graphs. Here, we first develop HetePR-BCD, a training-time (i.e., poisoning) adversarial attack on heterogeneous graphs that outperforms the state-of-the-art attacks proposed in the literature. Our experimental results on three benchmark heterogeneous graphs show that our attack, with a small perturbation budget of 15%, degrades performance by up to 32% (F1 score) compared to existing attacks. It is concerning that existing defenses are not robust against our attack. These defenses primarily modify the GNN's neural message passing operators under the assumption that adversarial attacks tend to connect nodes with dissimilar features, but this assumption does not hold in heterogeneous graphs. We construct HeteroGuard, an effective defense against training-time attacks, including HetePR-BCD, on heterogeneous models. HeteroGuard outperforms existing defenses by 3-8% on F1 score depending on the benchmark dataset.
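To make the heterogeneity point concrete, the following is a minimal sketch (not the authors' code) of a heterogeneous graph using PyTorch Geometric's HeteroData, with illustrative node/edge types and sizes. It shows why an attack that flattens away edge and node types can propose semantically invalid edges, and how a 15% perturbation budget could be counted per relation type; the actual HetePR-BCD optimization and HeteroGuard defense are not reproduced here.

```python
# Minimal illustrative sketch (not the paper's implementation), assuming
# PyTorch Geometric's HeteroData API; node/edge types and sizes are made up.
import torch
from torch_geometric.data import HeteroData

data = HeteroData()

# Two node types with different feature dimensionalities.
data['author'].x = torch.randn(100, 32)   # 100 author nodes, 32-dim features
data['paper'].x = torch.randn(300, 64)    # 300 paper nodes, 64-dim features

# One typed relation: author -> writes -> paper.
src = torch.randint(0, 100, (500,))       # author indices
dst = torch.randint(0, 300, (500,))       # paper indices
data['author', 'writes', 'paper'].edge_index = torch.stack([src, dst])

# A homogeneous attack that ignores the schema could propose an
# author -> writes -> author edge, which is semantically invalid here;
# a heterogeneity-aware attack or defense must respect the relation types.

# A 15% perturbation budget, counted separately for each relation type.
budget = {rel: int(0.15 * data[rel].edge_index.size(1))
          for rel in data.edge_types}
print(budget)  # {('author', 'writes', 'paper'): 75}
```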