Computer science
Hash function
Semantics (computer science)
Adversarial system
Artificial intelligence
Deep learning
Modal verb
Theoretical computer science
Computer security
Programming language
Chemistry
Polymer chemistry
Authors
Tianshi Wang, Lei Zhu, Zheng Zhang, Huaxiang Zhang, Junwei Han
Source
Journal: IEEE Transactions on Circuits and Systems for Video Technology
[Institute of Electrical and Electronics Engineers]
Date: 2023-10-01
Volume/Issue: 33 (10): 6159-6172
Citations: 2
Identifier
DOI:10.1109/tcsvt.2023.3263054
Abstract
Deep cross-modal hashing has achieved excellent retrieval performance with the powerful representation capability of deep neural networks. Regrettably, current methods are inevitably vulnerable to adversarial attacks, especially well-designed subtle perturbations that can easily fool deep cross-modal hashing models into returning irrelevant or attacker-specified results. Although adversarial attacks have attracted increasing attention, there are few studies on specialized attacks against deep cross-modal hashing. To solve these issues, we propose a targeted adversarial attack method against deep cross-modal hashing retrieval in this paper. To the best of our knowledge, this is the first work in this research field. Concretely, we first build a progressive fusion module to extract fine-grained target semantics through a progressive attention mechanism. Meanwhile, we design a semantic adaptation network to generate the target prototype code and reconstruct the category label, thus realizing the semantic interaction between the target semantics and the implicit semantics of the attacked model. To bridge modality gaps and preserve local example details, a semantic translator seamlessly translates the target semantics and then embeds them into benign examples in collaboration with a U-Net framework. Moreover, we construct a discriminator for adversarial training, which enhances the visual realism and category discrimination of adversarial examples, thus improving their targeted attack performance. Extensive experiments on widely tested cross-modal retrieval datasets demonstrate the superiority of our proposed method. In addition, transfer attack experiments show that our generated adversarial examples have good generalization capability in targeted attacks. The source codes and datasets are available at https://github.com/tswang0116/TA-DCH.
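The core idea of a targeted attack on hashing retrieval is to find a bounded perturbation that drives the hash code of a benign example toward an attacker-chosen target prototype code. The toy NumPy sketch below illustrates only that idea, not the paper's TA-DCH architecture: the "hashing model" is a single random linear layer with a tanh relaxation of sign(), and all names here (`W`, `t`, `eps`, `temp`) are illustrative assumptions rather than anything defined in the paper.

```python
import numpy as np

# Hedged toy sketch: optimize a perturbation delta, bounded in L_inf norm,
# so that the relaxed hash code of x + delta approaches a target prototype
# code t in {-1, +1}^k. The linear "model" W is a stand-in assumption, not
# the paper's deep network.

rng = np.random.default_rng(0)
d, k = 32, 4                          # input dimension, number of hash bits
W = rng.standard_normal((k, d))       # stand-in for a trained hashing model
x = rng.standard_normal(d)            # benign example
t = np.sign(rng.standard_normal(k))   # attacker-chosen target prototype code
temp = 6.0                            # temperature keeps tanh out of saturation

def relaxed_hash(v):
    """Differentiable surrogate for the discrete code sign(W @ v)."""
    return np.tanh(W @ v / temp)

def targeted_attack(x, t, eps=0.5, lr=0.5, steps=300):
    """Projected gradient descent on loss = -t . relaxed_hash(x + delta)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        h = relaxed_hash(x + delta)
        # d loss / d delta = -(1/temp) * W^T (t * (1 - h^2))
        grad = -(W.T @ (t * (1.0 - h ** 2))) / temp
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)  # L_inf budget keeps it subtle
    return delta

delta = targeted_attack(x, t)
adv_code = np.sign(relaxed_hash(x + delta))
print("bits matching target:", int((adv_code == t).sum()), "of", k)
```

In the paper, a generator (the semantic translator with a U-Net) produces such perturbations in a single forward pass after adversarial training; the iterative loop above plays the analogous role of aligning the adversarial example's code with the target prototype code under a perceptibility budget.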