Differential Privacy Defenses and Sampling Attacks for Membership Inference
Differential privacy
Computer science
Inference
Computer security
Sampling (signal processing)
Internet privacy
Authors
Shadi Rahimian, Tribhuvanesh Orekondy, Mario Fritz
Identifier
DOI:10.1145/3474369.3486876
Abstract
Machine learning models are commonly trained on sensitive and personal data such as pictures, medical records, and financial records. A serious breach of the privacy of this training set occurs when an adversary is able to decide whether or not a specific data point in her possession was used to train a model. While all previous membership inference attacks rely on access to the posterior probabilities, we present the first attack which relies only on the predicted class label, yet achieves a high success rate.
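To illustrate the label-only setting the abstract describes (this is a hedged sketch, not the paper's actual attack), the simplest decision rule an adversary with only hard-label access could use is the "gap" baseline: guess that a record was a training member whenever the model's predicted label matches the record's true label, exploiting the train/test accuracy gap. All names below are hypothetical.

```python
# Minimal sketch of a label-only membership guess (illustrative baseline,
# not the method proposed in the paper). The adversary sees only the
# predicted class label, not posterior probabilities.

def label_only_membership_guess(predict, x, true_label):
    """Guess 'member' iff the model classifies x correctly."""
    return predict(x) == true_label

# Toy stand-in for a trained classifier's hard-label output:
# predicts class 1 for positive inputs, class 0 otherwise.
def toy_model(x):
    return 1 if x > 0 else 0

print(label_only_membership_guess(toy_model, 2.5, 1))   # correctly classified -> guess member
print(label_only_membership_guess(toy_model, -1.0, 1))  # misclassified -> guess non-member
```

Because overfitted models classify training points correctly more often than unseen points, even this trivial rule performs above chance; stronger label-only attacks refine it by probing the model with perturbed inputs.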