Differential privacy
Computer science
Inference
Regularization (mathematics)
Stochastic gradient descent
Privacy preservation
Artificial intelligence
Information privacy
Optimization problem
Machine learning
Mathematical optimization
Computer security
Data mining
Artificial neural network
Algorithm
Mathematics
Authors
Eugenio Lomurno, Matteo Matteucci
Identifier
DOI:10.1007/978-3-031-25599-1_17
Abstract
Owners and developers of deep learning models must comply with stringent privacy-preservation rules for their training data, which is usually crowd-sourced and may contain sensitive information. The most widely adopted method for providing privacy guarantees in a deep learning model relies on optimization techniques that enforce differential privacy. According to the literature, this approach has proven to be a successful defence against several privacy attacks on models, but its downside is a substantial degradation of model performance. In this work, we compare the effectiveness of the differentially-private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices combined with regularization techniques. We analyze the utility of the resulting models, their training performance, and the effectiveness of membership inference and model inversion attacks against the learned models. Finally, we discuss the flaws and limits of differential privacy and empirically demonstrate the often superior privacy-preserving properties of dropout and l2-regularization.
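To make the comparison in the abstract concrete, below is a minimal NumPy sketch of a single DP-SGD update (per-example gradient clipping plus calibrated Gaussian noise, as in standard DP-SGD) next to an ordinary SGD update with l2-regularization, the kind of baseline the paper evaluates. It assumes binary logistic regression; the function names and hyperparameters (clip_norm, noise_multiplier, l2) are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD step for logistic regression: clip each per-example
    gradient to L2 norm <= clip_norm, sum, add Gaussian noise with
    std = noise_multiplier * clip_norm, then average and descend."""
    rng = np.random.default_rng() if rng is None else rng
    preds = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
    grads = (preds - y)[:, None] * X              # per-example gradients, (n, d)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)   # clip each row
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    g = (grads.sum(axis=0) + noise) / len(X)      # noisy average gradient
    return w - lr * g

def sgd_l2_step(w, X, y, lr=0.1, l2=1e-3):
    """Standard SGD step with l2-regularization (weight decay),
    the non-private baseline the paper compares against."""
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    g = X.T @ (preds - y) / len(X) + l2 * w       # average gradient + l2 term
    return w - lr * g

# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(10)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The clipping bounds each example's influence on the update, and the added noise masks any remaining individual contribution; that noise is also the source of the utility degradation the abstract describes, which the l2-regularized baseline avoids.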