Computer science
MNIST database
Computation
Encryption
Cryptography
Secure multi-party computation
Artificial intelligence
Secure communication
Deep learning
Machine learning
Distributed computing
Algorithm
Computer network
Authors
Ekanut Sotthiwat, Liangli Zhen, Zengxiang Li, Chi Zhang
Identifiers
DOI: 10.1109/ccgrid51090.2021.00101
Abstract
Multi-party computation (MPC) allows distributed machine learning to be performed in a privacy-preserving manner, so that end-hosts remain unaware of the true models on the clients. However, standard MPC incurs additional communication and computation costs due to its expensive cryptographic operations and protocols. In this paper, instead of applying heavyweight MPC over the entire local models for secure aggregation, we propose encrypting only the critical part of the model parameters (gradients), which reduces communication cost while retaining MPC's privacy-preserving advantages and without sacrificing the accuracy of the learned joint model. Theoretical analysis and experimental results verify that the proposed method prevents deep-leakage-from-gradients attacks from reconstructing the original data of individual participants. Experiments with deep learning models on the MNIST and CIFAR-10 datasets further demonstrate that the proposed partially encrypted MPC method reduces communication and computation costs significantly compared with conventional MPC, while achieving accuracy as high as that of traditional distributed learning, which aggregates local models in plain text.
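The abstract's key idea is that only the most informative slice of each client's gradient goes through MPC, while the rest is aggregated in plain text. Below is a minimal Python sketch of that idea, assuming the critical part is the top fraction of gradient entries by average magnitude and the MPC primitive is additive secret sharing over a prime field; the paper's exact selection rule and protocol may differ, and all names here (encode, share_additively, partially_encrypted_aggregate) are illustrative, not from the paper.

```python
import numpy as np

PRIME = 2**61 - 1   # prime field modulus for additive secret sharing
SCALE = 2**20       # fixed-point scale for encoding float gradients

def encode(x):
    # Fixed-point encode floats as field elements (negatives wrap mod PRIME).
    return np.round(x * SCALE).astype(np.int64) % PRIME

def decode(x):
    # Invert the encoding, mapping the upper half of the field to negatives.
    x = np.where(x > PRIME // 2, x - PRIME, x)
    return x.astype(np.float64) / SCALE

def share_additively(values, n_parties, rng):
    # Split field elements into n_parties random shares that sum to the
    # values mod PRIME; any n_parties - 1 of the shares reveal nothing.
    shares = rng.integers(0, PRIME, size=(n_parties - 1, values.size),
                          dtype=np.int64)
    total = np.zeros(values.size, dtype=np.int64)
    for s in shares:
        total = (total + s) % PRIME   # reduce each step to avoid int64 overflow
    last = (values - total) % PRIME
    return np.vstack([shares, last[None, :]])

def partially_encrypted_aggregate(gradients, frac=0.1, seed=0):
    # gradients: list of equal-length 1-D float arrays, one per client.
    rng = np.random.default_rng(seed)
    G = np.stack(gradients)
    n_clients, dim = G.shape
    k = max(1, int(frac * dim))
    # Treat the k entries with the largest average magnitude as "critical"
    # (an assumed selection rule for illustration).
    critical = np.argsort(-np.abs(G).mean(axis=0))[:k]

    agg = G.sum(axis=0)   # non-critical coordinates: ordinary plain-text sum

    # Critical coordinates: each client splits its slice into one share per
    # aggregator; each aggregator sums the shares it receives across clients,
    # so no single party ever sees an individual client's critical gradients.
    partials = np.zeros((n_clients, k), dtype=np.int64)
    for g in G:
        partials = (partials +
                    share_additively(encode(g[critical]), n_clients, rng)) % PRIME

    # Combining the aggregators' partial sums reconstructs only the aggregate.
    total = np.zeros(k, dtype=np.int64)
    for p in partials:
        total = (total + p) % PRIME
    agg[critical] = decode(total)
    return agg

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    grads = [rng.normal(size=1000) for _ in range(3)]
    approx = partially_encrypted_aggregate(grads, frac=0.1)
    assert np.allclose(approx, np.sum(grads, axis=0), atol=1e-4)
```

Under this scheme, no single aggregator sees a client's critical gradient entries in the clear, yet only a `frac` fraction of the vector incurs the sharing overhead, which is the source of the communication and computation savings the abstract reports.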