Computer science
Generalizability theory
Embedding
Consistency (knowledge bases)
Construct (Python library)
Domain (mathematical analysis)
Normalization (linguistics)
Theoretical computer science
Artificial intelligence
Mathematics
Computer network
Mathematical analysis
Statistics
Authors
Wenke Huang, Mang Ye, Zekun Shi, Li He, Bo Du
Identifier
DOI: 10.1109/cvpr52729.2023.01565
Abstract
Federated learning shows bright promise as a privacy-preserving collaborative learning technique. However, prevalent solutions mainly assume that all private data are sampled from the same domain. An important challenge arises when distributed data are derived from diverse domains: a private model then presents degraded performance on other domains (domain shift). We therefore expect the global model optimized by the federated learning process to generalize stably across multiple domains. In this paper, we propose Federated Prototypes Learning (FPL) for federated learning under domain shift. The core idea is to construct cluster prototypes and unbiased prototypes, providing rich domain knowledge and a fair convergence target. On the one hand, we pull each sample embedding closer to cluster prototypes of the same semantics than to cluster prototypes from distinct classes. On the other hand, we introduce consistency regularization to align each local instance with its respective unbiased prototype. Empirical results on the Digits and Office Caltech tasks demonstrate the effectiveness of the proposed solution and the contribution of its crucial modules.
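The two objectives sketched in the abstract can be illustrated with a minimal NumPy toy example. This is a hypothetical re-implementation, not the authors' code: the function names, the InfoNCE-style form of the cluster-prototype contrast, the L2 form of the consistency term, and the construction of unbiased prototypes as a mean of cluster prototypes are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    """Project vectors onto the unit sphere (common for prototype methods)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def cluster_contrastive_loss(emb, protos, proto_labels, label, tau=0.5):
    """InfoNCE-style loss (assumed form): pull the embedding toward cluster
    prototypes sharing its class and away from prototypes of other classes."""
    sims = emb @ protos.T / tau            # similarity to every prototype
    pos = proto_labels == label            # prototypes of the same semantics
    log_den = np.log(np.exp(sims).sum())   # log-sum over all prototypes
    return float(-(sims[pos] - log_den).mean())

def consistency_loss(emb, unbiased_proto):
    """Consistency regularization (assumed as a squared-error penalty)
    aligning the local instance with its unbiased prototype."""
    return float(((emb - unbiased_proto) ** 2).sum())

# Toy setup: 3 classes, 2 cluster prototypes per class, 8-d embeddings.
protos = l2_normalize(rng.normal(size=(6, 8)))
proto_labels = np.array([0, 0, 1, 1, 2, 2])
# Unbiased prototype per class, assumed here to be the mean of its clusters.
unbiased = np.stack([protos[proto_labels == c].mean(0) for c in range(3)])

emb = l2_normalize(rng.normal(size=8))
label = 1
total = (cluster_contrastive_loss(emb, protos, proto_labels, label)
         + consistency_loss(emb, unbiased[label]))
```

In a real federated setting both terms would be added to each client's local training loss; the toy example only shows that the contrastive term rewards similarity to same-class prototypes while the consistency term anchors the instance to a shared, fair target.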