Intrusion detection system
Computer science
Context
Anomaly detection
Machine learning
Artificial intelligence
Shot
Attack patterns
Labeled data
Cluster
Intrusion
One-shot
Training set
Data mining
Computer security
Computer network
Engineering
Mechanical engineering
Paleontology
Chemistry
Organic chemistry
Geochemistry
Biology
Geology
Authors
Nour Alhussien, Ahmed Aleroud
Identifier
DOI: 10.1109/noms56928.2023.10154453
Abstract
With the advancement of Machine Learning (ML) algorithms, more organizations have started using Machine Learning based Intrusion Detection Systems (ML-IDSs) to mitigate cyberattacks. However, the lack of training datasets is a major challenge when creating these systems. Using pre-trained models together with a small amount of labeled network data, or a few shots from internal sources, is therefore a possible way to overcome this challenge. However, relying on pre-trained models or external datasets introduces the risk of poisoned machine learning models. This work investigates a novel poisoning attack that creates a diverse mini cluster of attack and normal instances around an attack instance and then uses the instances in that cluster to poison it. The poisoned instances are then injected into the training data. A trained model is subsequently created by projecting labeled data from a poisoned source together with the few labeled shots from the target organization. An anomaly-based intrusion detection model is used to examine the effectiveness of the introduced approach under the proposed poisoning attack. The results show that the attack is effective in the context of few-shot IDS learning.
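The poisoning pipeline sketched in the abstract (build a mini cluster of attack and normal instances around a chosen attack instance, derive poisoned points from that cluster, and inject them into the source training data) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: it assumes numeric flow features, binary labels (0 = normal, 1 = attack), a nearest-neighbor mini cluster, and a simple interpolation-plus-label-flip poisoning rule; the helper names build_mini_cluster and poison_instance are hypothetical.

```python
# Hypothetical sketch of the cluster-based poisoning idea described in the abstract.
# Assumptions (not from the paper): numeric feature vectors, labels 0 = normal and
# 1 = attack, k-nearest-neighbor mini clusters, and poisoned points built by
# interpolating toward the attack instance and flipping their labels to 'normal'.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def build_mini_cluster(X, attack_idx, k=10):
    """Collect the k nearest instances (attacks and normal traffic) around one attack instance."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X[attack_idx].reshape(1, -1))
    return idx[0][1:]  # drop the attack instance itself


def poison_instance(X, attack_idx, neighbors, alpha=0.5):
    """Create poisoned points between the attack instance and its mini cluster,
    labelled as normal so they pull the learned boundary toward the attack."""
    target = X[attack_idx]
    poisoned_X = alpha * target + (1 - alpha) * X[neighbors]
    poisoned_y = np.zeros(len(neighbors), dtype=int)  # flipped to 'normal'
    return poisoned_X, poisoned_y


# Toy demonstration with synthetic 'network flow' features.
rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(200, 5))
X_attack = rng.normal(3.0, 1.0, size=(20, 5))
X = np.vstack([X_normal, X_attack])
y = np.concatenate([np.zeros(200, dtype=int), np.ones(20, dtype=int)])

attack_idx = 200  # first attack instance
neighbors = build_mini_cluster(X, attack_idx)
Xp, yp = poison_instance(X, attack_idx, neighbors)

# Inject the poisoned instances into the (source) training data, which would
# later be combined with the target organization's few labeled shots.
X_poisoned_train = np.vstack([X, Xp])
y_poisoned_train = np.concatenate([y, yp])
print(X_poisoned_train.shape, y_poisoned_train.shape)
```

In this sketch the anomaly-based IDS itself is left out; evaluating it would amount to training one model on the clean data and one on the poisoned data and comparing detection rates on the chosen attack instances.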