Adversarial system
Computer science
Robustness (evolution)
Computer security
Function (biology)
Artificial intelligence
Evasion (ethics)
Machine learning
Evolutionary biology
Biochemistry
Chemistry
Immune system
Biology
Immunology
Gene
Authors
Peishuai Sun, Shuhao Li, Jiang Xie, Hongbo Xu, Zhenyu Cheng, Rong Yang
Identifier
DOI:10.1016/j.cose.2023.103257
Abstract
Machine learning (ML) is increasingly used for malicious traffic detection and has proven effective. However, ML-based detectors are at risk of being deceived by adversarial examples, so it is critical to carry out adversarial attacks to evaluate their robustness. Several papers have studied adversarial attacks on ML-based detectors, but most assume unrealistic scenarios in two respects: (i) the attacks rely on extra prior knowledge about the ML-based models, such as the datasets and features they use, which is unlikely to be available in reality; (ii) the attacks generate impractical examples, i.e., traffic features or traffic that does not comply with communication protocol rules. In this paper, we propose an adversarial attack framework, GPMT, which generates practical adversarial malicious traffic to deceive ML-based detection. Compared with previous work, our approach has two main advantages: (i) little prior knowledge: we limit the prior knowledge available to the attacker to simulate black-box attacks in realistic settings; (ii) more adversarial and practical examples: we employ the Wasserstein GAN (WGAN) to execute adversarial attacks and design a novel loss function, generating practical adversarial examples that are more likely to deceive detectors. We attack nine ML-based models on the CTU-13 dataset to demonstrate the framework's validity. Experimental results show that GPMT is more effective and versatile than other methods: across the nine models, the mean evasion increase rate (EIR) reaches 65.53%, which is 16.48% higher than the best related method, DIGFuPAS. In addition, we test other datasets to verify the generality of the framework; the experiments show that our attack is equally applicable.
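The abstract describes using a WGAN to perturb malicious traffic so that it evades an ML-based detector while remaining close to valid traffic. The sketch below illustrates that general idea in PyTorch; it is not the authors' GPMT implementation, and every name and choice in it (FEATURE_DIM, NOISE_DIM, the critic architecture, and the small distance penalty standing in for the paper's practicality constraint and novel loss) is an illustrative assumption.

# Minimal WGAN-style sketch (assumed, not the authors' GPMT code): a generator
# perturbs malicious flow features so a Wasserstein critic judges them benign-like.
import torch
import torch.nn as nn

FEATURE_DIM = 20   # assumed number of flow-level features
NOISE_DIM = 8      # assumed noise dimension appended to each malicious sample

class Generator(nn.Module):
    """Maps a malicious feature vector plus noise to a perturbed feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, FEATURE_DIM), nn.Sigmoid(),  # features assumed scaled to [0, 1]
        )
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

class Critic(nn.Module):
    """Wasserstein critic scoring how 'benign-like' a feature vector is."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, critic, opt_g, opt_c, malicious, benign, clip=0.01):
    # Critic update: maximize E[critic(benign)] - E[critic(G(malicious))].
    z = torch.randn(malicious.size(0), NOISE_DIM)
    fake = gen(malicious, z).detach()
    loss_c = -(critic(benign).mean() - critic(fake).mean())
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():            # weight clipping as in the original WGAN
        p.data.clamp_(-clip, clip)

    # Generator update: make perturbed malicious traffic look benign to the critic,
    # while staying close to the original features (a crude stand-in for practicality).
    z = torch.randn(malicious.size(0), NOISE_DIM)
    fake = gen(malicious, z)
    loss_g = -critic(fake).mean() + 0.1 * (fake - malicious).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_c.item(), loss_g.item()

if __name__ == "__main__":
    gen, critic = Generator(), Critic()
    opt_g = torch.optim.RMSprop(gen.parameters(), lr=5e-5)
    opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
    malicious = torch.rand(128, FEATURE_DIM)   # placeholder batches; real use would
    benign = torch.rand(128, FEATURE_DIM)      # load CTU-13 flow features instead
    for step in range(100):
        lc, lg = train_step(gen, critic, opt_g, opt_c, malicious, benign)
    print(f"critic loss {lc:.4f}, generator loss {lg:.4f}")

In the paper's black-box setting, the critic would be trained only on benign traffic and the detector's observable responses rather than on the detector's internals; the L1 distance penalty above is merely a placeholder for the protocol-compliance constraints the paper enforces.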