Computer science
Identification (biology)
Federated learning
Aggregate (composite)
Quality (philosophy)
Data mining
Machine learning
Reuter
Artificial intelligence
Plant
Biology
Epistemology
Philosophy
Composite material
Materials science
Authors
Leiming Chen, Weishan Zhang, Cihao Dong, Dehai Zhao, Xingjie Zeng, Sibo Qiao, Yichang Zhu, Chee Wei Tan
Source
Journal: Entropy
Publisher: MDPI AG
Date: 2024-01-22
Volume/Issue: 26 (1): 96-96
Citations: 2
Abstract
Federated learning allows multiple parties to train models while jointly protecting user privacy. However, traditional federated learning requires every client to share the same model structure in order to fuse a global model. In real-world scenarios, each client may need to develop a personalized model suited to its own environment, which makes federated learning difficult in a heterogeneous-model setting. Some knowledge distillation methods address heterogeneous model fusion to an extent, but they assume that every client is trustworthy. In practice, some clients may produce malicious or low-quality knowledge, making it hard to aggregate trustworthy knowledge in a heterogeneous environment. To address these challenges, we propose a trustworthy heterogeneous federated learning framework (FedTKD) that achieves client identification and trustworthy knowledge fusion. First, we propose a malicious client identification method based on client logit features, which excludes malicious information when fusing the global logits. Then, we propose a selective knowledge fusion method to compute high-quality global logits. Additionally, we propose an adaptive knowledge distillation method to improve the accuracy of knowledge transfer from the server side to the client side. Finally, we design different attack and data distribution scenarios to validate our method. The experiments show that our method outperforms the baseline methods, remaining stable in all attack scenarios and achieving an accuracy improvement of 2% to 3% across different data distributions.
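The abstract describes three server-side steps: identifying malicious clients from their logit features, selectively fusing the remaining logits into global logits, and adaptively distilling that knowledge back into heterogeneous client models. The Python/PyTorch sketch below illustrates only that general pattern; the helper names, the median-plus-cosine-similarity filter, the plain averaging used for fusion, and the temperature-scaled KL distillation loss are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn.functional as F


def filter_trusted_logits(client_logits, threshold=0.5):
    """Drop clients whose logits disagree with a robust consensus.

    client_logits: list of tensors of shape (num_samples, num_classes),
    produced by each client on a shared reference batch. The per-class
    median consensus, cosine-similarity score, and fixed threshold are
    illustrative assumptions, not the paper's identification rule.
    """
    per_client = torch.stack([l.mean(dim=0) for l in client_logits])   # (K, C)
    consensus = per_client.median(dim=0, keepdim=True).values          # (1, C)
    sims = F.cosine_similarity(per_client, consensus, dim=1)           # (K,)
    return [l for l, s in zip(client_logits, sims) if s.item() >= threshold]


def fuse_global_logits(trusted_logits):
    """Average the remaining clients' logits into one global logit
    (a simple stand-in for the paper's selective fusion step)."""
    return torch.stack(trusted_logits).mean(dim=0)


def distillation_loss(student_logits, global_logits, temperature=2.0):
    """Temperature-scaled KL divergence used to distill the fused global
    logits into a (possibly heterogeneous) client model."""
    log_p = F.log_softmax(student_logits / temperature, dim=1)
    q = F.softmax(global_logits / temperature, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * temperature ** 2


# Toy round: three well-behaved clients and one sending noisy logits.
torch.manual_seed(0)
clients = [5.0 * torch.eye(10)[:8] + 0.1 * torch.randn(8, 10) for _ in range(3)]
clients.append(5.0 * torch.randn(8, 10))                  # low-quality / malicious logits
global_logits = fuse_global_logits(filter_trusted_logits(clients))
student_logits = torch.randn(8, 10, requires_grad=True)   # output of a heterogeneous client model
distillation_loss(student_logits, global_logits).backward()
```

In FedTKD the identification, fusion, and distillation steps are more elaborate than these stand-ins; the sketch only shows where each step sits within a training round.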