Backdoor
Computer science
Artificial intelligence
Artificial neural network
Machine learning
Deep learning
Deep neural network
Computer security
Authors
Rui Ning, Chunsheng Xin, Hongyi Wu
Identifiers
DOI:10.1109/infocom48880.2022.9796878
Abstract
While deep learning (DL)-based network traffic classification has demonstrated its success in a range of practical applications, such as network management and security control to name just a few, it is vulnerable to adversarial attacks. This paper reports TrojanFlow, a new and practical neural backdoor attack on DL-based network traffic classifiers. In contrast to traditional neural backdoor attacks, where a designated and sample-agnostic trigger is used to plant the backdoor, TrojanFlow poisons a model using dynamic, sample-specific triggers that are optimized to efficiently hijack the model. It features a unique design that jointly optimizes the trigger generator with the target classifier during training. The trigger generator can thus craft optimized triggers based on the input sample to efficiently manipulate the model's prediction. A well-engineered prototype is developed using PyTorch to demonstrate TrojanFlow attacking multiple practical DL-based network traffic classifiers. Thorough analysis is conducted to gain insights into the effectiveness of TrojanFlow, revealing the fundamentals of why it is effective and what it does to efficiently hijack the model. Extensive experiments are carried out on the well-known ISCXVPN2016 dataset with three widely adopted DL network traffic classifier architectures. TrojanFlow is compared with two other backdoor attacks under five state-of-the-art backdoor defenses. The results show that the TrojanFlow attack is stealthy, efficient, and highly robust against existing neural backdoor mitigation schemes.
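To make the "jointly optimize the trigger generator with the target classifier" idea concrete, below is a minimal PyTorch sketch of such a joint training loop. It is not the authors' implementation: the network shapes, the `eps` perturbation budget, the `poison_ratio`, the clamping, and the simple sum of clean and backdoor losses are all illustrative assumptions. It only shows the structure of poisoning a fraction of each batch with generator-crafted, sample-specific triggers relabeled to a target class while both networks are updated together.

```python
# Illustrative sketch only (not the TrojanFlow code): jointly train a
# trigger generator G and a traffic classifier F so that F behaves
# normally on clean flows but predicts TARGET_CLASS on triggered flows.
import torch
import torch.nn as nn
import torch.nn.functional as F

INPUT_DIM, NUM_CLASSES, TARGET_CLASS = 784, 12, 0   # assumed dimensions

classifier = nn.Sequential(                          # stand-in traffic classifier
    nn.Linear(INPUT_DIM, 256), nn.ReLU(),
    nn.Linear(256, NUM_CLASSES),
)
generator = nn.Sequential(                           # crafts a sample-specific trigger
    nn.Linear(INPUT_DIM, 256), nn.ReLU(),
    nn.Linear(256, INPUT_DIM), nn.Tanh(),
)
opt = torch.optim.Adam(
    list(classifier.parameters()) + list(generator.parameters()), lr=1e-3
)

def train_step(x, y, poison_ratio=0.1, eps=0.05):
    # Clean objective: the classifier must still work on benign traffic.
    clean_loss = F.cross_entropy(classifier(x), y)

    # Poison a small fraction of the batch with generator-crafted triggers.
    n_poison = max(1, int(poison_ratio * x.size(0)))
    x_p = x[:n_poison]
    trigger = eps * generator(x_p)                   # small, input-dependent perturbation
    poisoned = torch.clamp(x_p + trigger, 0.0, 1.0)  # keep features in a valid range
    target = torch.full((n_poison,), TARGET_CLASS, dtype=torch.long)
    backdoor_loss = F.cross_entropy(classifier(poisoned), target)

    # Joint objective: gradients flow into both the classifier and the generator.
    loss = clean_loss + backdoor_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random stand-in data:
x = torch.rand(32, INPUT_DIM)
y = torch.randint(0, NUM_CLASSES, (32,))
print(train_step(x, y))
```

Because the generator is trained against the classifier it is poisoning, the resulting triggers are optimized per input rather than being a fixed, sample-agnostic pattern, which is the contrast the abstract draws with traditional backdoor attacks.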