Tags: Personalization, Computer science, Generalization, Robustness (evolution), Compromise, Machine learning, Artificial intelligence, World Wide Web, Mathematical analysis, Mathematics, Social science, Biochemistry, Chemistry, Sociology, Gene
Authors
Xiongtao Zhang, Ji Wang, Weidong Bao, Yaohong Zhang, Xiaomin Zhu, Hao Peng, Xiang Zhao
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-14
Identifier
DOI: 10.1109/tnnls.2024.3405190
Abstract
Conventional federated learning (FL) assumes model homogeneity, requiring clients to expose their model parameters in order to improve the server model. This assumption, however, rarely holds in real-world scenarios. Sharing models and parameters raises security concerns for users, and focusing solely on the server-side model neglects clients' personalization requirements, potentially preventing users from obtaining the expected performance gains. Conversely, prioritizing personalization may compromise the generalization of the server model, hindering broad knowledge transfer. To address these challenges, we pose a key question: how can FL ensure both generalization and personalization when clients' models are heterogeneous? In this work, we introduce FedTED, which leverages a twin-branch structure and data-free knowledge distillation (DFKD) to handle model heterogeneity and the diverse objectives in FL. These techniques yield significant improvements in both personalization and generalization, while effectively coordinating the updates of clients' heterogeneous models and reconstructing a satisfactory global model. Our empirical evaluation shows that FedTED outperforms many representative algorithms, particularly when clients' models are heterogeneous, achieving a 19.37% gain in generalization performance and up to a 9.76% gain in personalization performance.
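The abstract gives no implementation details, but its two named ingredients can be sketched. Below is a minimal, hypothetical PyTorch sketch of (i) a twin-branch client model, with a generic head whose knowledge is shared and a personal head kept local, and (ii) a server-side data-free distillation step. All names here (`TwinBranchClient`, `server_dfkd`, the fixed generator) are illustrative assumptions, not the authors' code; a practical DFKD generator would itself be trained (e.g., adversarially) rather than left at random initialization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwinBranchClient(nn.Module):
    """Client model with a shared backbone and two heads: a generic head
    whose predictions feed the server model, and a personal head that
    stays local to serve the client's personalization needs."""

    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        # Heterogeneous clients may size this backbone differently;
        # only predictions, not parameters, are exchanged.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.generic_head = nn.Linear(hidden, num_classes)   # generalization branch
        self.personal_head = nn.Linear(hidden, num_classes)  # personalization branch

    def forward(self, x):
        z = self.backbone(x)
        return self.generic_head(z), self.personal_head(z)


def server_dfkd(global_model, client_models, in_dim,
                steps=200, batch=32, noise_dim=64, lr=1e-3):
    """Data-free knowledge distillation on the server: a small generator
    turns noise into pseudo-inputs, the averaged generic-head logits of
    the clients act as the teacher, and the global model is the student.
    No raw data or client parameters are transmitted."""
    gen = nn.Sequential(nn.Linear(noise_dim, in_dim), nn.Tanh())  # assumed, untrained
    opt = torch.optim.Adam(global_model.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            x_syn = gen(torch.randn(batch, noise_dim))  # synthetic samples
            teacher = torch.stack(
                [m(x_syn)[0] for m in client_models]    # generic-head logits only
            ).mean(dim=0)
        student = global_model(x_syn)
        loss = F.kl_div(F.log_softmax(student, dim=-1),
                        F.softmax(teacher, dim=-1), reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return global_model
```

For example, with `global_model = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))` and a list of locally trained `TwinBranchClient` instances, `server_dfkd` distills their averaged generic-head predictions into the global model. The sketch only illustrates the data flow the abstract describes; the paper's actual generator and distillation objective are presumably more elaborate.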