Keywords
Adversarial attack
Computer science
Artificial intelligence
Deep learning
Robustness
Machine learning
Deep neural networks
Pattern recognition
Context
Contextual image classification
Artificial neural network
Image (mathematics)
Hyperspectral imaging
Geography
Authors
Yonghao Xu, Bo Du, Liangpei Zhang
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume/Pages: 30, pp. 8671-8685
Cited by: 62
Identifier
DOI: 10.1109/TIP.2021.3118977
Abstract
Deep learning models have shown great capability for the hyperspectral image (HSI) classification task in recent years. Nevertheless, their vulnerability to adversarial attacks cannot be neglected. In this study, we systematically analyze the influence of adversarial attacks on the HSI classification task for the first time. While existing research on adversarial attacks focuses on generating adversarial examples in the RGB domain, the experiments in this study show that such adversarial examples also exist in the hyperspectral domain. Although the difference between the generated adversarial image and the original hyperspectral data is imperceptible to the human visual system, most existing state-of-the-art deep learning models can be fooled by the adversarial image into making wrong predictions. To address this challenge, we further propose a novel self-attention context network (SACNet). We find that the global context information contained in HSI can significantly improve the robustness of deep neural networks when they are confronted with adversarial attacks. Extensive experiments on three benchmark HSI datasets demonstrate that the proposed SACNet offers stronger resistance to adversarial examples than existing state-of-the-art deep learning models.
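The abstract states that imperceptible adversarial examples exist in the hyperspectral domain but does not name the attack used. A minimal sketch of how such an example could be generated is shown below, using FGSM (the standard fast gradient sign method), assuming a PyTorch classifier `model` that takes a hyperspectral patch scaled to [0, 1]; the function and parameter names are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """Generate an FGSM adversarial example for a hyperspectral input.

    x: (1, bands, H, W) hyperspectral patch, assumed scaled to [0, 1].
    y: ground-truth label tensor of shape (1,).
    epsilon: perturbation budget; a small value keeps the change
             imperceptible to the human visual system, as the
             abstract describes.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    loss = F.cross_entropy(logits, y)
    loss.backward()
    # One signed-gradient step increases the loss within an L_inf ball.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv
```

Feeding `x_adv` back through the classifier and comparing its prediction against the clean input's is the basic test of vulnerability the study reports: a perturbation bounded by `epsilon` per band is visually negligible yet can flip the predicted class.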
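The abstract credits global context, aggregated through self-attention, for SACNet's robustness, but does not specify the architecture here. The sketch below is a minimal non-local-style self-attention block in PyTorch that illustrates the general mechanism; `SelfAttentionContext` and its parameters are hypothetical stand-ins, not the exact SACNet design.

```python
import torch
import torch.nn as nn

class SelfAttentionContext(nn.Module):
    """Minimal non-local self-attention block.

    Every spatial position attends to all others, so each pixel's
    feature reflects global context rather than only its local
    neighborhood. Illustrative sketch, not the published SACNet.
    """

    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or max(channels // 2, 1)
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, reduced)
        k = self.key(x).flatten(2)                    # (b, reduced, hw)
        v = self.value(x).flatten(2).transpose(1, 2)  # (b, hw, c)
        # Scaled dot-product attention over all spatial positions.
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        context = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        # Residual connection keeps local features alongside global context.
        return x + context
```

Because the attended output mixes information from the whole scene, a perturbation concentrated on a few pixels is diluted by consistent features elsewhere, which is one plausible reading of why global context improves adversarial robustness.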