Authors
Youxin Huang,Shunzhi Zhu,Weizhe Chen,Zhicai Huang
Identifier
DOI:10.1016/j.comcom.2023.12.007
Abstract
Federated learning is a distributed machine learning method in which clients train models on local data, ensuring that raw data is never transmitted to a central server and thereby providing unique advantages for privacy protection. However, in real-world scenarios, data across clients may be non-Independently and Identically Distributed (non-IID) and imbalanced, leading to discrepancies among local models and degrading the quality of global model aggregation. To tackle this issue, this paper proposes a novel framework, FedARF, designed to improve federated learning performance by adaptively reconstructing local features during training. FedARF offers a simple reconstruction module for aligning feature representations from different clients, thereby enhancing the generalization capability of the cross-client aggregated model. Additionally, to better adapt the model to each client's data distribution, FedARF employs an adaptive feature fusion strategy that blends global and local model information more effectively, improving the model's accuracy and generalization performance. Experimental results demonstrate that the proposed method significantly outperforms existing federated learning methods on a variety of image classification tasks, achieving faster model convergence and superior performance under non-IID data distributions.
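The abstract does not specify the exact form of FedARF's adaptive fusion rule, but the idea of blending global and local model information can be sketched as a per-client convex combination of parameters. The function name `adaptive_fuse` and the scalar weight `alpha` below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def adaptive_fuse(global_w, local_w, alpha):
    """Blend global and local model parameters layer-by-layer.

    alpha in [0, 1] controls how much the client trusts the global model;
    a learnable or heuristically chosen alpha per client is one plausible
    reading of "adaptive" fusion (an assumption, not the paper's method).
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return {name: alpha * global_w[name] + (1.0 - alpha) * local_w[name]
            for name in global_w}

# Toy example: one layer's weights from the server and from a client.
global_w = {"layer1": np.array([1.0, 2.0])}
local_w = {"layer1": np.array([3.0, 4.0])}
fused = adaptive_fuse(global_w, local_w, alpha=0.25)
# With alpha=0.25 the client keeps 75% of its local weights.
```

A client would then continue local training from `fused` instead of overwriting its model with the global weights outright, which is one way such a scheme can mitigate the local/global mismatch that non-IID data creates.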