Computer science
Federated learning
Language change
Data transmission
Data modeling
Distributed computing
Machine learning
Artificial intelligence
Database
Computer network
Art
Literature
Authors
Xiuwen Fang, Mang Ye, Xiyuan Yang
Identifier
DOI: 10.1109/iccv51070.2023.00463
Abstract
Model heterogeneous federated learning is a realistic and challenging problem. In practice, owing to limitations in data collection, storage, and transmission, as well as the presence of free-rider participants, clients may suffer from data corruption. This paper makes the first attempt to investigate data corruption in the model heterogeneous federated learning framework. We design a novel method named Augmented Heterogeneous Federated Learning (AugHFL), which consists of two stages: 1) in the local update stage, a corruption-robust data augmentation strategy is adopted to minimize the adverse effects of local corruption while enabling the models to learn rich local knowledge; 2) in the collaborative update stage, we design a robust re-weighted communication approach that enables communication between heterogeneous models while mitigating the transfer of corrupted knowledge from others. Extensive experiments demonstrate the effectiveness of our method in coping with various corruption patterns in the model heterogeneous federated learning setting.