Differential privacy
Machine learning
Computer science
Artificial intelligence
Big data
Information privacy
Mainstream
Deep learning
Confidentiality
Data science
Data mining
Computer security
Philosophy
Theology
Authors
Maoguo Gong,Yu Xie,Ke Pan,Kaiyuan Feng,A. K. Qin
Source
Journal: IEEE Computational Intelligence Magazine
[Institute of Electrical and Electronics Engineers]
Date: 2020-04-14
Volume/Issue: 15 (2): 49-64
Citations: 85
Identifier
DOI:10.1109/mci.2020.2976185
Abstract
Recent years have witnessed remarkable successes of machine learning in various applications. However, machine learning models suffer from a potential risk of leaking private information contained in training data, which has attracted increasing research attention. As one of the mainstream privacy-preserving techniques, differential privacy provides a promising way to prevent the leakage of individual-level privacy in training data while preserving the quality of training data for model building. This work provides a comprehensive survey of existing works that incorporate differential privacy with machine learning, so-called differentially private machine learning, and categorizes them into two broad categories according to the differential privacy mechanism used: the Laplace/Gaussian/exponential mechanism and the output/objective perturbation mechanism. In the former, a calibrated amount of noise is added to the non-private model; in the latter, the output or the objective function is perturbed by random noise. In particular, the survey covers the techniques of differentially private deep learning to alleviate recent concerns about the privacy of big data contributors. In addition, the research challenges in terms of model utility, privacy level, and applications are discussed. To tackle these challenges, several potential future research directions for differentially private machine learning are pointed out.
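To make the "calibrated amount of noise" idea concrete, the following is a minimal illustrative sketch (not taken from the surveyed paper) of the classical Laplace mechanism: a numeric query result is perturbed with Laplace noise whose scale is the query's sensitivity divided by the privacy budget epsilon. The function name and parameters are illustrative choices, not an API from the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value under epsilon-differential privacy by adding
    Laplace noise with scale = sensitivity / epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query. Adding or removing one
# record changes a count by at most 1, so the sensitivity is 1.
data = [1, 0, 1, 1, 0, 1]
private_count = laplace_mechanism(sum(data), sensitivity=1, epsilon=0.5)
```

A smaller epsilon yields larger noise and stronger privacy; this is the utility/privacy trade-off the survey discusses.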