Computer science
Ideal (ethics)
Stochastic gradient descent
Artificial intelligence
Deep learning
Machine learning
Process (computing)
Gradient descent
Optimal stopping
Balance (ability)
Artificial neural network
Mathematical optimization
Mathematics
Law
Psychology
Operating system
Neuroscience
Political science
Authors
Tao Zhang, Tianqing Zhu, Kun Gao, Wanlei Zhou, Philip S. Yu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems [Institute of Electrical and Electronics Engineers]
Date: 2021-12-08
Volume/Issue: 34 (9): 5557-5569
Citations: 21
Identifier
DOI: 10.1109/tnnls.2021.3129592
Abstract
As deep learning models mature, one of the most pressing questions we face is: what is the ideal tradeoff among accuracy, fairness, and privacy (AFP)? Unfortunately, both the privacy and the fairness of a model come at the cost of its accuracy, so an efficient and effective means of fine-tuning the balance among these three needs is critical. Motivated by some curious observations in privacy-accuracy tradeoffs with differentially private stochastic gradient descent (DP-SGD), where fair models sometimes result, we conjecture that fairness might be better managed as an indirect byproduct of this process. Hence, we conduct a series of analyses, both theoretical and empirical, on the impacts of implementing DP-SGD in deep neural network models through gradient clipping and noise addition. The results show that, in deep learning, the number of training epochs is central to striking the AFP balance: DP-SGD makes training less stable, which creates opportunities to stop at model updates with low discrimination and little loss in accuracy. Based on this observation, we design two early stopping criteria that help analysts choose the optimal epoch at which to stop training a model so as to achieve their ideal tradeoff. Extensive experiments show that our methods achieve an ideal balance among accuracy, fairness, and privacy.
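To make the mechanism concrete, below is a minimal NumPy sketch of the DP-SGD step the abstract refers to: clip each example's gradient to a fixed L2 norm, average, and add calibrated Gaussian noise. The logistic-regression loss, the clip norm C, the noise multiplier sigma, the learning rate, and the per-epoch accuracy check are illustrative assumptions, not values or criteria from the paper; in particular, the paper's two concrete early stopping criteria are not reproduced here.

```python
import numpy as np

def per_example_grads(w, X, y):
    """Per-example logistic-regression gradients, one row per example."""
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    return (p - y)[:, None] * X           # shape (n, d): dL_i/dw for each i

def dp_sgd_step(w, X, y, lr, C, sigma, rng):
    """One DP-SGD update: clip per-example gradients, average, add noise."""
    g = per_example_grads(w, X, y)
    # Clip each example's gradient to L2 norm at most C.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    # Average, then add Gaussian noise with std sigma * C / batch_size
    # (equivalent to noising the clipped sum and dividing by the batch size).
    noisy_grad = g.mean(axis=0) + rng.normal(0.0, sigma * C / len(X), size=w.shape)
    return w - lr * noisy_grad

# Toy usage on synthetic data; the per-epoch evaluation is where an analyst
# would also track a fairness gap and apply an early stopping criterion.
rng = np.random.default_rng(42)
X = rng.normal(size=(256, 5))
y = (X @ np.ones(5) + rng.normal(size=256) > 0).astype(float)
w = np.zeros(5)
for epoch in range(20):
    w = dp_sgd_step(w, X, y, lr=0.5, C=1.0, sigma=1.0, rng=rng)
    acc = (((X @ w) > 0) == (y > 0.5)).mean()
    print(f"epoch {epoch:2d}  accuracy {acc:.3f}")
    # A hypothetical tradeoff check (NOT the paper's criteria): record a
    # discrimination metric here and stop at the epoch whose accuracy/fairness
    # pair, under the fixed privacy budget, gives the preferred AFP balance.
```

Because the noise destabilizes training, accuracy and fairness fluctuate across epochs rather than improving monotonically, which is exactly the property the abstract says an early stopping criterion can exploit.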