Keywords: Adversarial, Computer science, Selection (genetic algorithm), Function (biology), Scheme (mathematics), Computer security, Computer network, Artificial intelligence, Evolutionary biology, Biology, Mathematical analysis, Mathematics
Source
Journal: IEEE Access
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 2024-01-01
Volume: 12, Pages: 96051-96062
Identifiers
DOI: 10.1109/access.2024.3426534
Abstract
Federated learning (FL) is a deep learning paradigm that allows clients to train deep learning models in a distributed manner, keeping raw data local rather than sending it to the cloud and thereby reducing security and privacy concerns. Although FL is designed to be inherently secure, it still has many vulnerabilities. In this paper, we consider an FL scenario in which clients are subjected to an adversarial attack that exploits vulnerabilities in the decision-making process of deep learning models to induce misclassification. We observed that adversarial training involves a trade-off: as classification performance on adversarial examples increases, classification performance on normal samples decreases. To exploit this trade-off effectively, we propose an adaptive loss-function selection scheme that depends on whether the FL client is under attack. The proposed scheme was experimentally shown to achieve the best robust accuracy while minimizing the decrease in natural accuracy. Further, we combined the proposed scheme with Byzantine-robust aggregation. We expected model training to converge stably because Byzantine-robust aggregation prevents highly distorted models from being aggregated, but we obtained experimental results that were contrary to our expectations.