Keywords
Computer science
Generalization
Matching (statistics)
Domain (mathematical analysis)
Feature (linguistics)
Representation (politics)
Artificial intelligence
Class (philosophy)
Machine learning
Adversarial system
Feature learning
Data mining
Mathematics
Politics
Mathematical analysis
Philosophy
Linguistics
Statistics
Political science
Law
Authors
Liling Zhang, Xinyu Lei, Yichun Shi, Hongyu Huang, Chao Chen
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 12
Identifier
DOI: 10.48550/arxiv.2111.10487
Abstract
Federated Learning (FL) enables a group of clients to jointly train a machine learning model with the help of a centralized server. Clients do not need to submit their local data to the server during training, so the local training data of clients is protected. In FL, distributed clients collect their local data independently, and the dataset of each client may therefore naturally form a distinct source domain. In practice, a model trained over multiple source domains may generalize poorly to unseen target domains. To address this issue, we propose FedADG to equip federated learning with domain generalization capability. FedADG employs federated adversarial learning to measure and align the distributions among different source domains by matching each distribution to a reference distribution. The reference distribution is adaptively generated (by accommodating all source domains) to minimize the domain shift distance during alignment. In FedADG, the alignment is fine-grained since each class is aligned independently. In this way, the learned feature representation is intended to be universal, so it can generalize well to unseen domains. Extensive experiments on various datasets demonstrate that FedADG performs comparably to the state of the art.
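The class-wise alignment idea in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: FedADG trains an adversarial discriminator against a learned reference-distribution generator, whereas the sketch below substitutes a much simpler stand-in, class-conditional mean matching toward a shared reference computed from all domains. All names here (`align_step`, `domain_gap`, the synthetic data) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 source domains (clients), 2 classes, 4-dim features.
# Each domain's features for a class carry a domain-specific offset,
# simulating domain shift across clients.
n_domains, n_classes, dim, n = 3, 2, 4, 50
domain_offset = rng.normal(0, 2.0, size=(n_domains, dim))
class_center = rng.normal(0, 1.0, size=(n_classes, dim))
features = {
    (d, c): class_center[c] + domain_offset[d] + rng.normal(0, 0.1, size=(n, dim))
    for d in range(n_domains) for c in range(n_classes)
}

def per_class_means(feats):
    return {k: v.mean(axis=0) for k, v in feats.items()}

def domain_gap(feats):
    # Average pairwise distance between domains' class-conditional means:
    # a crude proxy for the domain shift the alignment should reduce.
    means = per_class_means(feats)
    gaps = [np.linalg.norm(means[(d1, c)] - means[(d2, c)])
            for c in range(n_classes)
            for d1 in range(n_domains) for d2 in range(d1 + 1, n_domains)]
    return float(np.mean(gaps))

def reference(feats):
    # Per-class reference: here just the average of all domains' class
    # means, mimicking a reference "accommodating all source domains".
    means = per_class_means(feats)
    return {c: np.mean([means[(d, c)] for d in range(n_domains)], axis=0)
            for c in range(n_classes)}

def align_step(feats, lr=0.5):
    # Fine-grained (per-class) alignment: shift each domain's features
    # toward that class's reference (stand-in for the adversarial loss).
    ref = reference(feats)
    return {(d, c): f + lr * (ref[c] - f.mean(axis=0))
            for (d, c), f in feats.items()}

gap_before = domain_gap(features)
for _ in range(5):
    features = align_step(features)
gap_after = domain_gap(features)
```

After a few alignment steps, the class-conditional means of the different domains converge toward the shared reference, so `gap_after` is much smaller than `gap_before`; the actual adversarial procedure aligns whole distributions rather than just first moments.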