Computer science
Adversarial system
Graph
Theoretical computer science
Artificial intelligence
Authors
Xin Wang,Heng Chang,Beini Xie,Tian Bian,Shiji Zhou,Daixin Wang,Zhiqiang Zhang,Wenwu Zhu
Source
Journal: IEEE Transactions on Knowledge and Data Engineering
[Institute of Electrical and Electronics Engineers]
Date: 2023-09-07
Pages: 1-12
Citations: 4
Identifier
DOI:10.1109/tkde.2023.3313059
Abstract
Graph neural networks (GNNs) have achieved tremendous success in graph classification and its diverse real-world downstream applications. Despite this success in learning graph representations, current GNN models remain vulnerable to adversarial examples on graph-structured data. Existing attack approaches are either limited to structure attacks or restricted to local information, calling for a more general attack framework on graph classification; designing one is challenging because local-node-level adversarial examples must be generated from global-graph-level information. To address this "global-to-local" attack challenge, we present CAMA, a novel and general framework that generates adversarial examples by manipulating both graph structure and node features. Specifically, we use Graph Class Activation Mapping and its variant to produce node-level importance scores corresponding to the graph classification task. Then, through heuristically designed algorithms, we perform both feature and structure attacks under unnoticeable perturbation budgets, guided by node-level and subgraph-level importance. Experiments attacking four state-of-the-art graph classification models on six real-world benchmarks verify the flexibility and effectiveness of our framework.
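The core idea of Class Activation Mapping on graphs can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: it assumes a GNN whose readout is global average pooling over final-layer node embeddings followed by a linear classifier, in which case a node's importance for a target class is the dot product of its embedding with that class's classifier weights. All names (`graph_cam`, `H`, `W`) are hypothetical.

```python
import numpy as np

def graph_cam(node_embeddings: np.ndarray,
              class_weights: np.ndarray,
              target_class: int) -> np.ndarray:
    """Per-node CAM importance for `target_class`.

    node_embeddings: (num_nodes, hidden_dim) final-layer node features H.
    class_weights:   (num_classes, hidden_dim) linear classifier weights W.
    Returns a (num_nodes,) score vector: CAM_c(v) = sum_k W[c, k] * H[v, k].
    """
    return node_embeddings @ class_weights[target_class]

# Toy usage: 5 nodes, 8-dim embeddings, 3 classes.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
W = rng.normal(size=(3, 8))
scores = graph_cam(H, W, target_class=1)

# Nodes with the largest scores would be prime candidates for
# feature/structure perturbation under a limited attack budget.
ranked_nodes = np.argsort(scores)[::-1]
```

Under these assumptions, the ranking step is what turns global graph-level classifier evidence into local node-level attack targets, which is the "global-to-local" mapping the abstract describes.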