Computer science
Federated learning
Computation
Mainstream
Perspective (graphics)
Raw data
Data science
Collaborative learning
Machine learning
Artificial intelligence
Knowledge management
Algorithm
Theology
Philosophy
Programming language
Authors
Fengxia Liu, Zhiming Zheng, Yexuan Shi, Yongxin Tong, Yi Zhang
Identifier
DOI:10.1007/s11704-023-3282-7
Abstract
Federated learning is a promising learning paradigm that allows collaborative training of models across multiple data owners without sharing their raw datasets. To enhance privacy in federated learning, multi-party computation can be leveraged for secure communication and computation during model training. This survey provides a comprehensive review of how to integrate mainstream multi-party computation techniques into diverse federated learning setups for guaranteed privacy, as well as the corresponding optimization techniques to improve model accuracy and training efficiency. We also pinpoint future directions for deploying federated learning in a wider range of applications.
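The core idea the abstract describes, using multi-party computation so a server can aggregate client model updates without seeing any individual update, can be illustrated with a minimal additive secret-sharing sketch. This is a generic textbook construction, not the survey's own protocol; all function names, the field modulus, and the integer quantization of updates are illustrative assumptions.

```python
import random

PRIME = 2**61 - 1  # field modulus; assumes model updates are quantized to integers mod PRIME

def share(value, n):
    """Split one integer into n additive shares mod PRIME.

    Any n-1 shares are uniformly random, so they reveal nothing
    about `value`; only the sum of all n shares reconstructs it.
    """
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(client_updates):
    """Aggregate client updates so no single party sees any raw update.

    Each client sends one share to each aggregating party; each party
    sums the shares it holds; combining the partial sums yields only
    the total, which is all that federated averaging needs.
    """
    n = len(client_updates)
    all_shares = [share(u, n) for u in client_updates]  # all_shares[i][j]: share j of client i
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % PRIME for j in range(n)]
    return sum(partial_sums) % PRIME

# Hypothetical quantized updates from three clients:
updates = [12, 7, 30]
total = secure_sum(updates)
print(total)          # equals sum(updates) = 49
print(total // len(updates))  # averaged update used for the global model step
```

In a real deployment the shares would travel over separate secure channels and the updates would be high-dimensional vectors, but the privacy argument is the same: each party's view is statistically independent of any individual client's update.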