Adversarial system
Computer science
Robustness (evolution)
Adversarial machine learning
Vulnerability (computing)
Reliability (semiconductor)
Data modeling
Artificial intelligence
Machine learning
Electric power system
Node (physics)
Data mining
Computer security
Power (physics)
Engineering
Biochemistry
Chemistry
Physics
Quantum mechanics
Database
Gene
Structural engineering
Authors
Rong Huang, Yuancheng Li
Source
Journal: IEEE Transactions on Smart Grid
[Institute of Electrical and Electronics Engineers]
Date: 2023-05-01
Volume/Issue: 14 (3): 2367-2376
Citations: 19
Identifier
DOI:10.1109/tsg.2022.3217060
Abstract
Network attack detection models based on machine learning (ML) have received extensive attention and research in PMU measurement data protection for power systems. However, well-trained ML-based detection models are vulnerable to adversarial attacks: by adding meticulously designed perturbations to the original data, an attacker can significantly decrease the accuracy and reliability of the model, causing the control center to receive unreliable PMU measurement data. This paper takes the network attack detection model in the power system as a case study to analyze the vulnerability of ML-based detection models under adversarial attacks. A mitigation strategy for adversarial attacks based on causal theory is then proposed, which enhances the robustness of the detection model under different adversarial attack scenarios. Unlike adversarial training, this mitigation strategy does not require adversarial samples to train models, saving computing resources. Furthermore, the strategy needs only a small amount of detection-model information and can be migrated to various models. Simulation experiments on IEEE bus test systems verify the threat of adversarial attacks against different ML-based detection models and the effectiveness of the proposed mitigation strategy.
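To make the attack scenario in the abstract concrete, the sketch below shows a gradient-sign (FGSM-style) perturbation against a toy detector. This is not the paper's model or dataset: the logistic-regression "detector", the synthetic features standing in for PMU-derived measurements, and the step size `eps` are all illustrative assumptions. It only demonstrates the general mechanism the abstract describes, namely that a small crafted perturbation can sharply reduce a trained detector's accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ML-based attack detector (NOT the paper's model):
# logistic regression on synthetic 8-dimensional feature vectors.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic binary data: class 0 = "normal", class 1 = "attacked".
X0 = rng.normal(loc=-1.0, scale=1.0, size=(200, 8))
X1 = rng.normal(loc=+1.0, scale=1.0, size=(200, 8))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train the detector with plain batch gradient descent.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(X):
    return (sigmoid(X @ w + b) >= 0.5).astype(float)

clean_acc = np.mean(predict(X) == y)

# FGSM-style attack: move each sample a small step in the sign
# direction of the loss gradient. For logistic regression the
# per-sample gradient of the cross-entropy loss w.r.t. x is (p - y) * w.
eps = 1.0  # illustrative perturbation budget
p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
adv_acc = np.mean(predict(X_adv) == y)

print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

Running the sketch shows the accuracy gap between clean and perturbed inputs; the paper's contribution is a causal-theory mitigation that closes this gap without training on such adversarial samples.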