Keywords
Adversarial examples; Deep neural networks; Deep learning; Artificial neural networks; Natural language processing; Machine learning; Artificial intelligence; Computer science
Authors
Yu Zhang, Kun Shao, Junan Yang, Hui Liu
Source
Published in: 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC)
Date: 2021-10-15
Pages: 1281-1285
Cited by: 6
Identifiers
DOI: 10.1109/itnec52019.2021.9587104
Abstract
Deep neural networks (DNNs) have achieved remarkable success in tasks such as image classification, speech recognition, and natural language processing. However, DNNs have proven vulnerable to adversarial examples: inputs crafted by adding imperceptible perturbations that mislead a deep learning model's output decision and pose significant security risks to the systems built on it. Previous research has focused mainly on computer vision, neglecting the security issues of natural language processing models. Because text data is discrete, existing methods from the image domain cannot be applied to text directly. This article summarizes the research on adversarial attacks and defenses in natural language processing and discusses future research directions.
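
For context, the "imperceptible perturbations" in the image domain are typically computed from the loss gradient, which is exactly what does not transfer to discrete text. The sketch below contrasts the two settings: a minimal version of the fast gradient sign method (FGSM), a standard image-domain attack of the kind the abstract alludes to, next to a toy greedy word-substitution loop of the sort used for text. This is an illustrative sketch, not the paper's method; the model, epsilon, score_fn, and synonyms inputs are assumptions for demonstration.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Continuous-input attack: step by epsilon along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # The perturbation is bounded by epsilon per pixel, hence "imperceptible".
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    def greedy_word_substitution(score_fn, tokens, synonyms):
        """Discrete-input attack sketch: greedily swap words to lower the model's
        score for the true label. score_fn and synonyms are hypothetical inputs."""
        tokens = list(tokens)
        for i, original in enumerate(tokens):
            best, best_score = original, score_fn(tokens)
            for candidate in synonyms.get(original, []):
                tokens[i] = candidate
                s = score_fn(tokens)
                if s < best_score:
                    best, best_score = candidate, s
            tokens[i] = best  # keep the most damaging substitution found
        return tokens

The contrast is the abstract's point: the gradient step in fgsm_attack has no direct analogue for token IDs, so text attacks instead search a discrete neighborhood of the input (synonym swaps, character edits), often guided by gradients or model queries.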