Topics
Computer science, Adversarial system, Optics (focusing), Taxonomy (biology), Domain (mathematical analysis), Security domain, Artificial intelligence, Adversarial machine learning, Data science, Computer security, Machine learning, Mathematical analysis, Botany, Physics, Mathematics, Optics, Biology
Authors
Panagiotis Bountakas, Apostolis Zarras, Alexios Lekidis, Christos Xenakis
Identifier
DOI:10.1016/j.cosrev.2023.100573
Abstract
Adversarial Machine Learning (AML) is a recently introduced technique that aims to deceive Machine Learning (ML) models by providing falsified inputs, rendering those models ineffective. Consequently, most researchers focus on detecting new AML attacks that can undermine existing ML infrastructures, while overlooking the significance of defense strategies. This article constitutes a survey of the existing literature on AML attacks and defenses, with a special focus on a taxonomy of recent works on AML defense techniques for different application domains, such as audio, cyber-security, NLP, and computer vision. The proposed survey also explores the methodology of the defense solutions and compares them using several criteria, such as whether they are attack- and/or domain-agnostic, whether they deploy appropriate AML evaluation metrics, and whether they share their source code and/or their evaluation datasets. To the best of our knowledge, this article constitutes the first survey that seeks to systematize the existing knowledge by focusing solely on defense solutions against AML and providing innovative directions for future research on tackling the increasing threat of AML.
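The abstract describes AML attacks as supplying falsified inputs that flip a model's prediction. A minimal sketch of such an evasion attack, using the well-known Fast Gradient Sign Method (FGSM) against a hand-built logistic-regression classifier, can illustrate the idea; the toy weights and epsilon below are assumptions for the demo, not taken from the survey.

```python
# Sketch of an evasion-style AML attack: FGSM on a toy logistic regression.
# The "trained" weights (w, b), the input x, and eps are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0])  # assumed trained weights of the victim model
b = 0.5                    # assumed bias

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Perturb x by eps in the gradient-sign direction that increases the loss."""
    p = predict(x)
    grad_x = (p - y) * w   # gradient of binary cross-entropy w.r.t. the input
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.2])   # clean input, true label y = 1
y = 1.0
x_adv = fgsm(x, y, eps=0.9)

print(predict(x) > 0.5)     # clean input classified as class 1
print(predict(x_adv) > 0.5) # perturbed input misclassified as class 0
```

A small, carefully directed perturbation is enough to cross the decision boundary even though the input changes only slightly, which is precisely the threat model the surveyed defenses must counter.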