Computer Science
Computation
Secure Multi-Party Computation
Computer Security
Artificial Intelligence
Theoretical Computer Science
Programming Languages
Authors
Jingxue Chen, Hang Yan, Z. Liu, Min Zhang, Hu Xiong, Shui Yu
Abstract
Nowadays, with the development of artificial intelligence (AI), privacy issues attract wide attention from society and individuals. It is desirable to make data available but invisible, i.e., to realize data analysis and computation without disclosing the data to unauthorized entities. Federated learning (FL) has emerged as a promising privacy-preserving computation method for AI. However, new privacy issues have arisen in FL-based applications, because various inference attacks can still infer relevant information about the raw data from local models or gradients, directly leading to privacy disclosure. Therefore, it is critical to resist these attacks to achieve complete privacy-preserving computation. In light of the overwhelming variety and multitude of privacy-preserving computation protocols, we survey these protocols from a series of perspectives to supply better comprehension for researchers and scholars. Concretely, a classification of attacks is discussed, covering four kinds of inference attacks as well as the malicious server and poisoning attacks. Besides, this article systematically captures the state of the art in privacy-preserving computation protocols by analyzing their design rationale, reproducing the experiments of classic schemes, and evaluating all discussed protocols in terms of efficiency and security properties. Finally, this survey identifies a number of interesting future directions.
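To make the FL setting described above concrete, the following is a minimal sketch of one federated averaging (FedAvg) round for a scalar linear model. The client datasets, learning rate, and function names are illustrative assumptions, not taken from the survey; the point is only that clients share updated weights, never raw data, which is also why gradient-based inference attacks remain possible.

```python
# Minimal FedAvg sketch (illustrative, not the survey's protocol).
# Model: y = w * x with squared loss; three hypothetical clients.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on a client's private (x, y) pairs.

    Only the updated weight leaves the client; the raw data never does.
    """
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(weights):
    """Server-side aggregation: average the client weights."""
    return sum(weights) / len(weights)

# Hypothetical private datasets, one list per client.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(1.0, 2.1), (3.0, 6.3)],
    [(2.0, 3.8)],
]

w = 0.0  # global model weight
for _ in range(50):  # communication rounds
    w = fed_avg([local_update(w, data) for data in clients])
# w converges near 2.0, the slope underlying the synthetic data
```

Even in this toy example, the server observes per-client weight updates, which is exactly the side channel that the inference attacks surveyed in the article exploit.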