Adversarial system
Computer science
Sensitivity (control systems)
Artificial intelligence
Differentiable function
Machine learning
Maxima and minima
Iterative method
Vulnerability (computing)
Mathematical optimization
Algorithm
Mathematics
Computer security
Engineering
Mathematical analysis
Electrical engineering
Authors
Elad Sofer,Nir Shlezinger
Identifier
DOI: 10.1109/mlsp55844.2023.10285957
Abstract
Adversarial examples are an emerging threat to machine learning (ML) models, allowing adversaries to substantially deteriorate performance by introducing seemingly unnoticeable perturbations. These attacks are typically considered to be an ML risk, often associated with the black-box operation and sensitivity to features learned from data of deep neural networks (DNNs), and are rarely viewed as a threat to classic non-learned decision rules, such as iterative optimizers. In this work we explore the sensitivity of iterative optimizers to adversarial examples, building upon recent advances in treating these methods as ML models. We identify that many iterative optimizers share the properties of end-to-end differentiability and the existence of impactful small perturbations, which make them amenable to adversarial attacks. The interpretability of iterative optimizers makes it possible to associate adversarial examples with modifications to the traversed loss surface that notably affect the location of the sought minima. We visualize this effect and demonstrate the vulnerability of iterative optimizers for compressed sensing and hybrid beamforming tasks, showing that different optimizers tackling the same optimization formulation vary in their adversarial sensitivity.
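To make the mechanism described in the abstract concrete, below is a minimal sketch of how end-to-end differentiability exposes an iterative optimizer to gradient-based attacks. It is not the paper's implementation: it assumes PyTorch, uses an unrolled ISTA solver for a compressed sensing problem, and applies an FGSM-style perturbation to the measurement vector; all names (soft_threshold, ista, eps) are illustrative.

# A minimal sketch, assuming PyTorch; the unrolled ISTA solver, the FGSM-style
# perturbation, and all names here are illustrative, not taken from the paper.
import torch

def soft_threshold(v, tau):
    # Proximal operator of the l1 norm used inside ISTA.
    return torch.sign(v) * torch.clamp(v.abs() - tau, min=0.0)

def ista(y, A, lam=0.1, n_iters=50):
    # Unrolled ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1.
    # Every step is differentiable, so gradients flow from the recovered x
    # back to the measurement y (end-to-end differentiability).
    step = (1.0 / torch.linalg.matrix_norm(A, ord=2) ** 2).item()
    x = torch.zeros(A.shape[1], dtype=y.dtype)
    for _ in range(n_iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

torch.manual_seed(0)
m, n = 30, 100
A = torch.randn(m, n) / m ** 0.5
x_true = torch.zeros(n)
x_true[torch.randperm(n)[:5]] = torch.randn(5)   # sparse ground truth
y_clean = A @ x_true

# FGSM-style attack: nudge the measurement y inside an l_inf ball of radius
# eps in the direction that increases the optimizer's reconstruction error.
eps = 0.05
y = y_clean.clone().requires_grad_(True)
loss = torch.sum((ista(y, A) - x_true) ** 2)
loss.backward()
y_adv = y_clean + eps * y.grad.sign()

err_clean = torch.norm(ista(y_clean, A) - x_true).item()
err_adv = torch.norm(ista(y_adv, A) - x_true).item()
print(f"clean error: {err_clean:.3f}, adversarial error: {err_adv:.3f}")

In this sketch the perturbation changes the data-fidelity term of the optimized objective, i.e., the traversed loss surface, which is the effect the paper visualizes; comparing err_clean and err_adv illustrates how a small, seemingly unnoticeable change to the input can shift the minimum the optimizer converges to.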