Computer science
Adversarial system
Heel
Artificial intelligence
Speech recognition
Natural language processing
Machine learning
Medicine
Anatomy
Author
Sajal Aggarwal, Dinesh Kumar Vishwakarma
Identifier
DOI:10.1016/j.eswa.2024.124278
Abstract
The accessibility of online hate speech has increased significantly, making it crucial for social-media companies to prioritize efforts to curb its spread. Although deep learning models are known to be vulnerable to adversarial attacks, whether models fine-tuned for hate speech detection exhibit similar susceptibility remains underexplored. Textual adversarial attacks involve making subtle alterations to the original samples, designed so that the resulting adversarial examples deceive the target model even though human observers still classify them correctly. Although many approaches have been proposed for word-level adversarial attacks on textual data, they struggle to preserve the semantic coherence of texts while generating adversarial counterparts, and the adversarial examples they produce are often easily distinguishable by human observers. This work presents a novel methodology that uses visually confusable glyphs and invisible characters to generate semantically and visually similar adversarial examples in a black-box setting. In the context of the hate speech detection task, our attack was effectively applied to several state-of-the-art deep learning models fine-tuned on two benchmark datasets. The major contributions of this study are: (1) demonstrating the vulnerability of deep learning models fine-tuned for hate speech detection; (2) a novel attack framework based on a simple yet potent modification strategy; (3) superior outcomes in terms of accuracy degradation, attack success rate, average perturbation, semantic similarity, and perplexity compared to existing baselines; (4) strict adherence to prescribed linguistic constraints while formulating adversarial samples; and (5) preservation of the ground-truth label while perturbing the original input with imperceptible adversarial examples.
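The attack described in the abstract operates at the character level. The sketch below (Python, not taken from the paper) illustrates the general idea of such a perturbation: replacing a few letters with visually confusable Unicode glyphs and inserting zero-width characters, so the text looks unchanged to a human reader while its byte and token sequence changes for the model. The homoglyph map, the perturbation rate, and the `perturb` helper are illustrative assumptions, not the authors' actual framework.

```python
# Minimal sketch (assumed, not the authors' implementation): character-level
# perturbation with visually confusable glyphs and invisible characters.
import random

# Latin letters mapped to visually similar Cyrillic code points (assumed examples).
HOMOGLYPHS = {
    "a": "\u0430",  # CYRILLIC SMALL LETTER A
    "e": "\u0435",  # CYRILLIC SMALL LETTER IE
    "o": "\u043e",  # CYRILLIC SMALL LETTER O
    "p": "\u0440",  # CYRILLIC SMALL LETTER ER
    "c": "\u0441",  # CYRILLIC SMALL LETTER ES
    "i": "\u0456",  # CYRILLIC SMALL LETTER BYELORUSSIAN-UKRAINIAN I
}
ZERO_WIDTH_SPACE = "\u200b"  # invisible character inserted after some letters


def perturb(text: str, rate: float = 0.15, seed: int = 0) -> str:
    """Return a visually near-identical variant of `text`, with a fraction of
    characters replaced by homoglyphs or followed by an invisible character."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in HOMOGLYPHS and rng.random() < rate:
            glyph = HOMOGLYPHS[ch.lower()]
            out.append(glyph.upper() if ch.isupper() else glyph)
        elif ch.isalpha() and rng.random() < rate / 2:
            out.append(ch + ZERO_WIDTH_SPACE)
        else:
            out.append(ch)
    return "".join(out)


if __name__ == "__main__":
    original = "You people are the worst"
    adversarial = perturb(original)
    # The perturbed string renders (almost) identically to the original but
    # differs in its underlying code points, which is the property that can
    # flip the prediction of a fine-tuned hate speech classifier.
    print(repr(original))
    print(repr(adversarial))
```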