Computer science
Gradient descent
Differential privacy
Leverage (statistics)
Stochastic gradient descent
Deep learning
Shrinkage
Optimization problem
Noise (video)
Mathematical optimization
Data mining
Artificial intelligence
Algorithm
Artificial neural network
Mathematics
Image (mathematics)
Programming language
Authors
Lin Chen,Danyang Yue,Xiuli Ding,Zuan Wang,Kim‐Kwang Raymond Choo,Hai Jin
Identifier
DOI:10.1109/tifs.2023.3293961
Abstract
Deep learning (DL) has been adopted in a broad range of Internet-of-Things (IoT) applications such as autonomous driving, intelligent healthcare, and smart grids, but limitations such as those relating to data and user privacy can complicate its broader implementation. Seeking to jointly address privacy and utility, in this paper we connect layer-wise relevance propagation with gradient descent to inject appropriate noise into the gradients. We also improve the conventional gradient clipping method by dividing the gradients into several groups, thereby minimizing gradient distortion. Since a noisy gradient makes the descent direction uncertain and might adversely affect loss minimization, we use the NoisyMin algorithm to select the best step size for each gradient perturbation. Finally, we integrate an adaptive optimizer into the gradient descent. In addition to improving model utility, we also leverage the leading Sinh-Normal noise addition mechanism to achieve truncated concentrated differential privacy (tCDP), as demonstrated by our rigorous analysis. Our experimental evaluations also validate the effectiveness of the proposed algorithm.
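The abstract's core mechanical idea (splitting the gradient into groups, clipping each group separately, then perturbing with noise) can be illustrated with a minimal sketch. This is not the authors' algorithm: the function name `group_clip_and_perturb`, the group layout, and the use of plain Gaussian noise (rather than the paper's Sinh-Normal mechanism) are all assumptions for illustration only.

```python
import numpy as np

def group_clip_and_perturb(grad, group_sizes, clip_norm, noise_scale, rng):
    """Hypothetical sketch of group-wise clipping with noise injection.

    Each group of the gradient is clipped to `clip_norm` independently,
    so a large group does not force the whole vector to be scaled down;
    Gaussian noise (a stand-in for the paper's Sinh-Normal mechanism)
    is then added to each clipped group.
    """
    noisy_groups = []
    start = 0
    for size in group_sizes:
        g = grad[start:start + size]
        norm = np.linalg.norm(g)
        if norm > clip_norm:
            # Scale only this group down to the clipping bound.
            g = g * (clip_norm / norm)
        # Noise calibrated to the clipping bound, as in DP-SGD-style methods.
        g = g + rng.normal(0.0, noise_scale * clip_norm, size=g.shape)
        noisy_groups.append(g)
        start += size
    return np.concatenate(noisy_groups)

rng = np.random.default_rng(0)
grad = np.array([3.0, 4.0, 0.1, -0.2])  # toy gradient split into two groups of 2
noisy = group_clip_and_perturb(grad, [2, 2], clip_norm=1.0,
                               noise_scale=0.1, rng=rng)
print(noisy.shape)  # (4,)
```

With full-vector clipping, the small second group would be scaled down along with the large first group; group-wise clipping leaves it untouched, which is the distortion reduction the abstract describes.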