Adversarial system
Computer science
Overfitting
Artificial intelligence
Deep neural networks
Machine learning
Robustness (evolution)
Artificial neural network
Inference
Adversarial machine learning
Regularization (linguistics)
Deep learning
Natural language processing
Natural language
Gene
Biochemistry
Chemistry
Authors
Shreya Goyal,Sumanth Doddapaneni,Mitesh M. Khapra,Balaraman Ravindran
Source
Journal: ACM Computing Surveys
[Association for Computing Machinery]
Date: 2023-04-20
Volume/Issue: 55 (14s): 1-39
Citations: 34
Abstract
In the past few years, it has become increasingly evident that deep neural networks are not resilient enough to withstand adversarial perturbations in input data, leaving them vulnerable to attack. Various authors have proposed strong adversarial attacks for computer vision and Natural Language Processing (NLP) tasks. In response, many defense mechanisms have been proposed to prevent these networks from failing. The significance of defending neural networks against adversarial attacks lies in ensuring that the model’s predictions remain unchanged even when the input data is perturbed. Several methods for adversarial defense in NLP have been proposed, catering to different NLP tasks such as text classification, named entity recognition, and natural language inference. Some of these methods not only defend neural networks against adversarial attacks but also act as a regularization mechanism during training, preventing the model from overfitting. This survey reviews the various methods proposed for adversarial defenses in NLP over the past few years and introduces a novel taxonomy for them. It also highlights the fragility of advanced deep neural networks in NLP and the challenges involved in defending them.