Federated learning
Computer science
Robustness (evolution)
Anomaly detection
Deep learning
Artificial intelligence
Popularity
Machine learning
Architecture
Adversarial system
Computer security
Psychology
Social psychology
Art
Biochemistry
Chemistry
Visual arts
Gene
Authors
Karthik Shenoy K,M. M. Manohara Pai,Radhika M. Pai
Identifier
DOI:10.1109/conit59222.2023.10205848
Abstract
Federated learning is a decentralized approach to machine learning that has grown in popularity in recent years. It enables multiple participants to train a common model without revealing their data. This strategy is, however, susceptible to attacks from malicious clients, who can launch targeted model poisoning attacks and degrade learning performance by delivering false model updates to the server. Such fraudulent updates, and the attackers behind them, must be identified and eliminated to preserve the robustness and security of the shared model. In this paper, a novel Siamese network-based architecture for robust federated learning is proposed, which can identify and eliminate harmful updates. Our method is assessed and compared with other approaches for adversarial detection in image classification tasks in a federated setting, using a CNN model. Experimental findings demonstrate that the system offers reliable federated learning that is resistant to both targeted model poisoning and untargeted Byzantine attacks. Overall, the study advances the development of secure federated learning systems by proposing a novel method for identifying and deleting fraudulent updates in federated learning.
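The abstract does not give implementation details, but the general idea of filtering poisoned client updates before server-side aggregation can be sketched as follows. This is a minimal illustration, not the paper's method: where the authors use a learned Siamese network to score update similarity, this sketch substitutes plain cosine similarity as a stand-in; the function names, the median-similarity scoring, and the `threshold` value are all hypothetical assumptions for illustration.

```python
import math

def cosine(u, v):
    # Cosine similarity between two flattened update vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def filter_and_average(updates, threshold=0.5):
    """Score each client update by its median similarity to the other
    updates (a stand-in for a Siamese-network similarity score), drop
    updates scoring below `threshold`, then FedAvg the rest."""
    n = len(updates)
    scores = []
    for i in range(n):
        sims = sorted(cosine(updates[i], updates[j])
                      for j in range(n) if j != i)
        scores.append(sims[len(sims) // 2])  # median similarity
    kept = [u for u, s in zip(updates, scores) if s >= threshold]
    dim = len(updates[0])
    # Plain average of the surviving updates (FedAvg with equal weights).
    return [sum(u[k] for u in kept) / len(kept) for k in range(dim)], scores

# Three benign updates cluster together; one poisoned update points away
# from the cluster, gets a low score, and is excluded from aggregation.
avg, scores = filter_and_average(
    [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9], [-10.0, 5.0]])
```

Here `avg` is `[1.0, 1.0]` (the mean of the three benign updates) and the poisoned client's score is strongly negative, so it is filtered out. A learned Siamese embedding plays the same role as `cosine` here but can separate subtler targeted-poisoning updates that remain geometrically close to benign ones.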