Keywords
Adversarial system
Computer science
Benchmark (surveying)
Black box
Vulnerability (computing)
Deep neural network
Field (mathematics)
Deep learning
Artificial intelligence
Key (lock)
Artificial neural network
Machine learning
Adversarial machine learning
Data mining
Computer security
Geodesy
Mathematics
Pure mathematics
Geography
Authors
Yonghao Xu, Pedram Ghamisi
Identifier
DOI: 10.1109/TGRS.2022.3156392
Abstract
Deep neural networks have achieved great success in many important remote sensing tasks. Nevertheless, their vulnerability to adversarial examples should not be neglected. In this study, we systematically analyze universal adversarial examples in remote sensing data for the first time, without any knowledge of the victim model. Specifically, we propose a novel black-box adversarial attack method, namely Mixup-Attack, and its simple variant Mixcut-Attack, for remote sensing data. The key idea of the proposed methods is to find common vulnerabilities among different networks by attacking the features in the shallow layer of a given surrogate model. Despite their simplicity, the proposed methods can generate transferable adversarial examples that deceive most state-of-the-art deep neural networks in both scene classification and semantic segmentation tasks with high success rates. We further provide the generated universal adversarial examples in the dataset named UAE-RS, which is the first dataset that provides black-box adversarial samples in the remote sensing field. We hope UAE-RS may serve as a benchmark that helps researchers design deep neural networks with strong resistance toward adversarial attacks in the remote sensing field. Code and the UAE-RS dataset are available online (https://github.com/YonghaoXu/UAE-RS).
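The sketch below illustrates the general idea stated in the abstract, namely crafting a perturbation by attacking the shallow-layer features of a surrogate model so that it may transfer to unseen victim networks. It is not the authors' Mixup-Attack or Mixcut-Attack implementation (see the linked repository for that); the choice of a torchvision ResNet-18 as the surrogate, the layer cut, the MSE feature loss, and the epsilon/step-size/iteration values are all illustrative assumptions.

```python
# Minimal sketch of a shallow-feature attack on a surrogate model (assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def shallow_feature_attack(image, epsilon=8/255, alpha=2/255, steps=10):
    """Perturb `image` (N, 3, H, W in [0, 1]) by pushing the surrogate's shallow
    features away from those of the clean input, within an L-infinity budget."""
    surrogate = resnet18(weights=None).eval()          # assumed surrogate network
    # Treat everything up to the first residual stage as the "shallow layer".
    shallow = torch.nn.Sequential(
        surrogate.conv1, surrogate.bn1, surrogate.relu,
        surrogate.maxpool, surrogate.layer1,
    )

    with torch.no_grad():
        clean_feat = shallow(image)                    # reference shallow features

    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        adv_feat = shallow(adv)
        loss = F.mse_loss(adv_feat, clean_feat)        # feature-space distance
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                        # ascent step
            adv = image + (adv - image).clamp(-epsilon, epsilon)   # project to budget
            adv = adv.clamp(0.0, 1.0)                              # keep a valid image
    return adv.detach()

if __name__ == "__main__":
    x = torch.rand(1, 3, 224, 224)                     # stand-in for a remote sensing image
    x_adv = shallow_feature_attack(x)
    print((x_adv - x).abs().max())                     # perturbation stays within epsilon
```

The rationale for targeting a shallow layer, as the abstract states, is that early-layer features are shared vulnerabilities across different architectures, so a perturbation crafted on one surrogate is more likely to deceive other victim models in a black-box setting.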