Overfitting
Computer science
Adversarial system
Transferability
Noise (video)
Artificial intelligence
Machine learning
Deep neural network
Sample (material)
Sampling (signal processing)
Data mining
Deep learning
Artificial neural network
Image (mathematics)
Computer vision
Reuter
Filter (signal processing)
Chemistry
Chromatography
Authors
Jiahao Huang,Mi Wen,Minjie Wei,Yanbing Bi
Identifier
DOI:10.1016/j.cose.2023.103541
Abstract
Deep neural networks have achieved remarkable success in the field of computer vision. However, they are susceptible to adversarial attacks. The transferability of adversarial samples has made practical black-box attacks feasible, underscoring the importance of research on transferability. Existing work indicates that adversarial samples tend to overfit to the source model, getting trapped in local optima, thereby reducing the transferability of adversarial samples. To address this issue, we propose the Random Noise Transfer Attack (RNTA) to search for adversarial samples in a larger data distribution, seeking the global optimum. Specifically, we suggest injecting multiple random noise perturbations into the sample before each iteration of sample optimization, effectively exploring the decision boundary within an extended data distribution space. By aggregating gradients, we identify a better global optimum, mitigating the issue of overfitting to the source model. Through extensive experiments on the large-scale visual classification task on ImageNet, we demonstrate that our method increases the success rate of momentum-based attacks by an average of 20.1%. Furthermore, our approach can be combined with existing attack methods, achieving a success rate of 94.3%, which highlights the insecurity of current models and defense mechanisms.
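The abstract describes the core RNTA loop: before each optimization step, inject several independent random noise perturbations into the current adversarial sample, compute a gradient for each noisy copy, aggregate the gradients, and take a momentum-based signed step. The sketch below illustrates that loop on a toy linear binary classifier with an analytic cross-entropy gradient; the model, loss, and all hyperparameter values (`n_noise`, `sigma`, etc.) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def rnta_attack(x, y, w, b, eps=0.3, alpha=0.02, iters=10,
                n_noise=5, sigma=0.1, mu=1.0, seed=0):
    """Illustrative RNTA-style attack on a linear classifier
    f(x) = w.x + b with sigmoid cross-entropy loss.
    Hypothetical reimplementation: the paper attacks deep
    ImageNet models, not this toy model."""
    rng = np.random.default_rng(seed)
    x_adv = x.astype(float).copy()
    g = np.zeros_like(x_adv)  # momentum accumulator
    for _ in range(iters):
        grads = []
        for _ in range(n_noise):
            # inject random noise before computing each gradient
            x_noisy = x_adv + rng.normal(0.0, sigma, size=x_adv.shape)
            p = 1.0 / (1.0 + np.exp(-(w @ x_noisy + b)))
            # analytic gradient of cross-entropy loss w.r.t. the input
            grads.append((p - y) * w)
        # aggregate gradients over the noisy copies
        avg = np.mean(grads, axis=0)
        # momentum update (MI-FGSM style) with L1 normalization
        g = mu * g + avg / (np.abs(avg).sum() + 1e-12)
        # signed ascent step, projected back into the eps-ball
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

For a correctly classified positive example, the attack should push the logit `w @ x_adv + b` below its clean value while keeping the perturbation inside the `eps`-ball; averaging over noisy copies is what the abstract credits with escaping source-model local optima.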