Adversarial system
Computer science
Taxonomy (biology)
Visualization
Deep learning
Field (mathematics)
Data science
Construct (Python library)
Artificial intelligence
Machine learning
Plant
Biology
Mathematics
Pure mathematics
Programming language
Authors
Teng Long,Qi Gao,Lili Xu,Zhangbing Zhou
Identifier
DOI:10.1016/j.cose.2022.102847
Abstract
• Classical approaches to taxonomy-based adversarial attacks are extensively discussed.
• Based on the extended taxonomy, recent popular adversarial attack methods are introduced and analyzed.
• A knowledge graph is established, and on this basis the hotspots of related work are visualized and analyzed.
• Future research directions are proposed to further improve adversarial attacks in the field of AI security.

Deep learning has been widely applied in fields such as computer vision, natural language processing, and data mining. Although deep learning has achieved significant success in solving complex problems, deep neural networks have been shown to be vulnerable to adversarial attacks, which cause models to fail at their tasks and limit the application of deep learning in security-critical areas. In this paper, we first review classical and recent representative adversarial attacks based on a reasonable taxonomy of adversarial attacks. Then, we construct a knowledge graph from citation relationships using the software VOSviewer, and visualize and analyze the development of the field based on 5923 articles from Scopus. Finally, we propose possible research directions for the development of adversarial attacks based on trends deduced from keyword detection analysis. All the data used for visualization are available at: https://github.com/NanyunLengmu/Adversarial-Attack-Visualization .
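To make the core idea concrete, below is a minimal sketch of one of the classical gradient-based attacks the survey covers, the Fast Gradient Sign Method (FGSM): the input is perturbed by a small step in the direction that increases the model's loss. The toy logistic "model", its weights, and the inputs here are all illustrative assumptions, not taken from the paper; a real attack would target a deep network via automatic differentiation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, eps):
    """FGSM sketch: move x by eps in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # d(cross-entropy loss)/dx for this toy model
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and input (illustrative values only).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])        # clean input, true label y = 1
y = 1.0

x_adv = fgsm_attack(x, w, b, y, eps=0.5)
clean_pred = sigmoid(w @ x + b)      # > 0.5: clean input classified correctly
adv_pred = sigmoid(w @ x_adv + b)    # pushed below 0.5: misclassified
```

Even though the perturbation is bounded in the infinity norm by `eps`, it is enough to flip the model's decision, which is exactly the vulnerability the abstract refers to.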