Differential privacy
Computer science
Upload
Private information retrieval
Joint (building)
Information privacy
Data sharing
Privacy protection
Differential (mechanical device)
Federated learning
Focus (optics)
Subspace topology
Computer security
Data mining
Artificial intelligence
World Wide Web
Engineering
Optics
Physics
Medicine
Pathology
Aerospace engineering
Architectural engineering
Alternative medicine
Authors
Junxu Liu,Jian Lou,Li Xiong,Jinfei Liu,Xiaofeng Meng
Source
Journal: Proceedings of the VLDB Endowment
[VLDB Endowment]
Date: 2021-12-01
Volume/Issue: 15 (4): 828-840
Citations: 14
Identifier
DOI: 10.14778/3503585.3503592
Abstract
Federated Learning (FL) is a promising framework for multiple clients to learn a joint model without directly sharing the data. In addition to high utility of the joint model, rigorous privacy protection of the data and communication efficiency are important design goals. Many existing efforts achieve rigorous privacy by ensuring differential privacy for intermediate model parameters, however, they assume a uniform privacy parameter for all the clients. In practice, different clients may have different privacy requirements due to varying policies or preferences. In this paper, we focus on explicitly modeling and leveraging the heterogeneous privacy requirements of different clients and study how to optimize utility for the joint model while minimizing communication cost. As differentially private perturbations affect the model utility, a natural idea is to make better use of information submitted by the clients with higher privacy budgets (referred to as "public" clients, and the opposite as "private" clients). The challenge is how to use such information without biasing the joint model. We propose Projected Federated Averaging (PFA), which extracts the top singular subspace of the model updates submitted by "public" clients and utilizes them to project the model updates of "private" clients before aggregating them. We then propose communication-efficient PFA+, which allows "private" clients to upload projected model updates instead of original ones. Our experiments verify the utility boost of both algorithms compared to the baseline methods, whereby PFA+ achieves over 99% uplink communication reduction for "private" clients.
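The core projection step described in the abstract can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the function name, the choice of `k`, and the plain-mean aggregation are assumptions for clarity. It shows the idea of extracting the top singular subspace from "public" clients' updates and projecting "private" clients' updates onto it before aggregation; under PFA+, a "private" client could upload only the `k`-dimensional coefficients rather than the full update, which is where the uplink savings come from.

```python
import numpy as np

def pfa_aggregate(public_updates, private_updates, k=2):
    """Illustrative sketch of the PFA idea (names and k are assumptions).

    public_updates / private_updates: lists of 1-D model-update vectors.
    """
    # Stack public updates as columns of a d x m matrix.
    M = np.stack(public_updates, axis=1)
    # Left singular vectors of M; the first k columns span the
    # top singular subspace of the public updates.
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    Uk = U[:, :k]  # d x k orthonormal basis
    # Project each private update onto that subspace. Under PFA+,
    # the client would upload the k-dim coefficients Uk.T @ u
    # instead of the full d-dim vector u.
    projected = [Uk @ (Uk.T @ u) for u in private_updates]
    # Aggregate all updates (simple unweighted mean for illustration).
    return np.mean(public_updates + projected, axis=0)
```

Note that the projection `Uk @ (Uk.T @ u)` is idempotent: projecting an already-projected update leaves it unchanged, since `Uk` has orthonormal columns.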