Computer science
Computer security
Intrusion detection system
Context (archaeology)
Robustness (evolution)
Federated learning
Artificial intelligence
Machine learning
Biochemistry
Biology
Gene
Paleontology
Chemistry
Authors
Nguyen Chi Vy,Nguyen Huu Quyen,Phan The Duy,Van-Hau Pham
Identifier
DOI:10.1007/978-3-030-92708-0_8
Abstract
The emergence of the Federated Learning (FL) paradigm has drawn much attention from the research community because of the demand for privacy preservation in widespread machine learning adoption. This is especially pressing in the context of the industrial Internet of Things (IIoT), with distributed data resources and sensitive local data held by each data owner. FL in the IIoT context can help keep sensitive data from being exploited by adversaries while achieving acceptable performance by aggregating additional knowledge from distributed collaborators. Following a similar trend, an intrusion detection system (IDS) leveraging the FL approach can encourage cooperation in building an efficient privacy-preserving solution among multiple participants owning sensitive network data. However, a rogue collaborator can manipulate its local dataset and send malicious updates to the model aggregation, aiming to reduce the global model's prediction accuracy. This can happen when a collaborator is a compromised participant or when the local training device has weak defenses. This paper introduces an FL-based IDS, named Fed-IDS, which facilitates collaborative training among many organizations to enhance their robustness against diverse and unknown attacks in the context of IIoT. Next, we perform poisoning attacks against such an IDS, including a label-flipping strategy and Generative Adversarial Networks (GANs). Then, a validation approach is utilized as a countermeasure that rejects malicious updates, protecting the global model from poisoning attacks. Experiments conducted on Kitsune, a real-world attack dataset, demonstrate the high effectiveness of the validation function in the Fed-IDS framework against data poisoning.
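The label-flipping attack and the validation-based rejection of malicious updates described in the abstract can be sketched as follows. This is a minimal illustration on synthetic data with logistic regression, not the paper's Fed-IDS implementation: the data generator, model, hyperparameters, and the 0.05 tolerance in the filter are all invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n, flip_labels=False):
    """Synthetic two-class data; flip_labels simulates a poisoned client."""
    y = (rng.random(n) < 0.5).astype(float)
    X = rng.normal(size=(n, 2)) + (2 * y - 1)[:, None] * 1.5
    if flip_labels:
        y = 1.0 - y  # label-flipping: swap benign <-> attack labels
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local logistic-regression training, starting from the global model."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

# Four honest collaborators and one label-flipping attacker.
clients = [make_client_data(200) for _ in range(4)] + [make_client_data(200, flip_labels=True)]
X_val, y_val = make_client_data(500)  # server-side held-out validation set

w_global = np.zeros(2)
for _ in range(5):  # federated rounds
    updates = [local_update(w_global, X, y) for X, y in clients]
    baseline = accuracy(w_global, X_val, y_val)
    # Validation filter: reject any update that degrades held-out accuracy
    # (with a small tolerance), then aggregate the accepted updates.
    accepted = [w for w in updates if accuracy(w, X_val, y_val) >= baseline - 0.05]
    if accepted:
        w_global = np.mean(accepted, axis=0)

print(f"validation accuracy: {accuracy(w_global, X_val, y_val):.3f}")
```

The key design point mirrored here is that the server never inspects raw client data, only each submitted model's behavior on a held-out validation set, so the privacy property of FL is preserved while poisoned updates are filtered out before aggregation.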