Topics
Adversarial systems; computer science; deep learning; artificial intelligence; deep neural networks; artificial neural networks; discriminative models; vulnerability (computing); benchmark (surveying); machine learning; software deployment; computer security; geography; geodesy; operating systems
Authors
Yonghao Xu,Bo Du,Liangpei Zhang
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing (Institute of Electrical and Electronics Engineers)
Date: 2021-02-01
Volume/Issue: 59 (2): 1604-1617
Citations: 77
Identifier
DOI:10.1109/tgrs.2020.2999962
Abstract
Deep neural networks, which can learn representative and discriminative features from data in a hierarchical manner, have achieved state-of-the-art performance in the remote sensing scene classification task. Despite the great success that deep learning algorithms have obtained, their vulnerability to adversarial examples deserves special attention. In this article, we systematically analyze the threat of adversarial examples to deep neural networks for remote sensing scene classification. Both targeted and untargeted attacks are performed to generate subtle adversarial perturbations, which are imperceptible to a human observer but may easily fool deep learning models. Adversarial examples can be generated simply by adding these perturbations to the original high-resolution remote sensing (HRRS) images, and they differ only slightly from the originals. An intriguing discovery in our study is that most of these adversarial examples are misclassified by state-of-the-art deep neural networks with very high confidence. This phenomenon may limit the practical deployment of these deep learning models in the safety-critical remote sensing field. To address this problem, the adversarial training strategy is further investigated in this article, which significantly increases the resistance of deep models to adversarial examples. Extensive experiments on three benchmark HRRS image data sets demonstrate that while most well-known deep neural networks are sensitive to adversarial perturbations, the adversarial training strategy helps to alleviate their vulnerability to adversarial examples.
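To make the attack idea concrete, below is a minimal, self-contained sketch of the fast gradient sign method (FGSM), one standard way to craft the kind of untargeted, small-magnitude perturbations the abstract describes. The toy linear "classifier", the 8-dimensional input, and all constants are illustrative assumptions, not the authors' actual networks or HRRS data.

```python
import math
import random

rng = random.Random(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy binary classifier: p(y=1 | x) = sigmoid(w . x + b).
# Stands in for a deep scene classifier purely for illustration.
w = [rng.gauss(0.0, 1.0) for _ in range(8)]
b = 0.1

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def bce_loss(p, y):
    # Binary cross-entropy loss for a prediction p against label y.
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(x, y, eps):
    """Untargeted FGSM: x_adv = clip(x + eps * sign(dL/dx)).

    For this logistic model the cross-entropy gradient with respect
    to the input is dL/dx = (p - y) * w, so no autograd is needed.
    """
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    step = lambda g: eps if g > 0 else (-eps if g < 0 else 0.0)
    # Clip to [0, 1] so the perturbed input stays a valid "pixel" range.
    return [min(1.0, max(0.0, xi + step(g))) for xi, g in zip(x, grad)]

x = [rng.random() for _ in range(8)]      # stand-in for a flattened image patch
y = 1.0 if predict(x) > 0.5 else 0.0      # take the clean prediction as the label
x_adv = fgsm(x, y, eps=0.05)              # per-feature change bounded by eps

print("clean loss:      ", bce_loss(predict(x), y))
print("adversarial loss:", bce_loss(predict(x_adv), y))
```

Each feature moves by at most `eps`, so the adversarial input stays close to the original while the loss on the true label rises. Adversarial training, the defense the abstract investigates, would mix such `x_adv` samples into the training batches; only the attack step is sketched here.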