Computer Science
Differential Privacy
Scheme (mathematics)
Robustness (evolution)
Overfitting
Artificial Intelligence
Distributed Learning
Machine Learning
Federated Learning
Adaptive Learning
Process (computing)
Distributed Computing
Data Mining
Artificial Neural Network
Chemistry
Mathematical Analysis
Gene
Operating System
Psychology
Biochemistry
Mathematics
Pedagogy
Authors
Xiang Wu, Yongting Zhang, Minyu Shi, Pei Li, Ruirui Li, Naixue Xiong
Identifier
DOI: 10.1016/j.future.2021.09.015
Abstract
Driven by the upcoming development of the sixth-generation communication system (6G), distributed machine learning schemes represented by federated learning have shown advantages in data utilization and multi-party cooperative model training. The total communication cost of federated learning depends on the number of communication rounds, the communication consumption of each participant, the choice of a reasonable learning rate, and the guarantee of computational fairness. In addition, the data-isolation strategy of the federated learning framework cannot by itself fully guarantee user privacy. Motivated by these problems, this paper proposes a federated learning scheme that combines an adaptive gradient descent strategy with a differential privacy mechanism, suited to multi-party collaborative modeling scenarios. To ensure that the federated learning scheme trains efficiently under a limited communication budget, an adaptive learning rate algorithm is used to adjust the gradient descent process and to avoid model overfitting and fluctuation, thereby improving modeling efficiency and performance in multi-party computation scenarios. Furthermore, to adapt to ultra-large-scale distributed secure computing scenarios, this research introduces a differential privacy mechanism to resist various background-knowledge attacks. Experimental results demonstrate that the proposed adaptive federated learning model outperforms traditional models under a fixed communication cost. The scheme is also robust to different hyper-parameter settings and provides stronger, quantifiable privacy preservation for the federated learning process.
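The abstract describes its two main ingredients, adaptive-learning-rate local updates and a differential privacy mechanism, only at a high level. The sketch below is a minimal illustration of how such pieces commonly fit together in one federated averaging round, not the authors' published algorithm: the Adam-style client update, the clipping bound C, the noise multiplier sigma, and all function names are assumptions introduced for this example.

```python
# Illustrative sketch only (assumed design, not the paper's exact method):
# clients run Adam-style adaptive-learning-rate steps locally, and the server
# averages norm-clipped updates with Gaussian differential-privacy noise.
import numpy as np

def client_update(w, grad_fn, data, lr=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-8, local_steps=5):
    """Adaptive gradient descent (Adam-style) on one client's local data."""
    m = np.zeros_like(w)   # first-moment (mean) estimate
    v = np.zeros_like(w)   # second-moment (uncentered variance) estimate
    for t in range(1, local_steps + 1):
        g = grad_fn(w, data)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)            # bias-corrected moments
        v_hat = v / (1 - beta2**t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-coordinate step size
    return w

def server_aggregate(w_global, client_weights, C=1.0, sigma=0.5, rng=None):
    """FedAvg over clipped client updates plus Gaussian-mechanism noise."""
    rng = rng or np.random.default_rng()
    deltas = []
    for w in client_weights:
        delta = w - w_global
        delta = delta / max(1.0, np.linalg.norm(delta) / C)  # clip to norm C
        deltas.append(delta)
    mean_delta = np.mean(deltas, axis=0)
    # Clipping bounds each client's contribution to C/n, so noise with
    # std sigma * C / n calibrates the Gaussian mechanism to that sensitivity.
    noise = rng.normal(0.0, sigma * C / len(deltas), size=w_global.shape)
    return w_global + mean_delta + noise

# Toy usage: least-squares gradients on synthetic per-client data.
rng = np.random.default_rng(0)
w = np.zeros(3)
grad_fn = lambda w, d: d["X"].T @ (d["X"] @ w - d["y"]) / len(d["y"])
clients = [{"X": rng.normal(size=(32, 3)), "y": rng.normal(size=32)}
           for _ in range(4)]
for _ in range(10):                       # communication rounds
    local_ws = [client_update(w, grad_fn, d) for d in clients]
    w = server_aggregate(w, local_ws, rng=rng)
```

The clipping step is what makes the privacy guarantee quantifiable: it bounds the sensitivity of the aggregate to any single client, and the adaptive per-coordinate step size is one way to keep local training stable within a fixed number of communication rounds, in the spirit of the approach the abstract describes.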