Authors
Lei Zhu, Tianshi Wang, Jingjing Li, Zheng Zhang, Jialie Shen, Xinhua Wang
Source
Journal: ACM Transactions on Information Systems
Date: 2022-09-03
Volume/Issue: 41 (3): 1-25
Citations: 14
Abstract
Deep cross-modal hashing retrieval models inherit the vulnerability of deep neural networks: they are susceptible to adversarial attacks, especially in the form of subtle perturbations to the inputs. Although many adversarial attack methods have been proposed to probe the robustness of hashing retrieval models, they still suffer from two problems: (1) most assume a white-box setting, which is usually unrealistic in practical applications, and (2) their iterative optimization for generating adversarial examples incurs heavy computation. To address these problems, we propose an Efficient Query-based Black-Box Attack (EQB²A) against deep cross-modal hashing retrieval, which can efficiently generate adversarial examples under the black-box setting. Specifically, by sending a few query requests to the attacked retrieval system, we perform cross-modal retrieval model stealing based on the neighbor relationship between the retrieved results and the query, thereby obtaining knockoff models that substitute for the attacked system. A multi-modal knockoffs-driven adversarial generation method is then proposed to achieve efficient adversarial example generation. Once network training converges, EQB²A can generate adversarial examples by a single forward pass given only benign images. Experiments show that EQB²A achieves superior attacking performance under the black-box setting.
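The core idea in the abstract — attacking a hashing model through a stolen surrogate rather than the real system — can be illustrated with a minimal sketch. The code below is not the paper's method (EQB²A trains a generator network over multi-modal knockoffs); it only shows the underlying principle with assumed, hypothetical components: a linear-plus-sign hash model standing in for the knockoff, a tanh relaxation of `sign()` to obtain gradients, and a single FGSM-style step that pushes the adversarial hash code away from the benign one under a small perturbation budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate ("knockoff") hashing model: a random linear layer
# followed by sign(), standing in for the stolen deep hashing network.
BITS, DIM = 16, 64
W = rng.normal(size=(BITS, DIM))

def hash_code(x):
    """Discrete hash code of an input (the quantity retrieval depends on)."""
    return np.sign(W @ x)

def hash_relaxed(x):
    """Continuous tanh relaxation of sign(Wx), used to get gradients."""
    return np.tanh(W @ x)

def adversarial_example(x, eps=0.05):
    """One FGSM-style step against the knockoff (illustration only).

    We minimize the inner product between the benign hash code and the
    relaxed code of the perturbed input, which encourages hash bits to flip.
    """
    h_benign = hash_code(x)
    t = hash_relaxed(x)
    # d/dx [ h_benign . tanh(Wx) ] = W^T (h_benign * (1 - tanh(Wx)^2))
    grad = W.T @ (h_benign * (1.0 - t ** 2))
    # Step against the gradient, keeping the perturbation subtle (L-inf <= eps).
    return x - eps * np.sign(grad)

x = rng.normal(size=DIM)
x_adv = adversarial_example(x)
print("max perturbation:", np.max(np.abs(x_adv - x)))
print("hash bits flipped:", int(np.sum(hash_code(x) != hash_code(x_adv))))
```

In the paper's setting this gradient step is replaced by a learned generator: after training against the knockoffs, producing an adversarial example costs only one forward pass, with no per-example iterative optimization.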