Computer Science
Federated Learning
Homomorphic Encryption
Differential Privacy
Incentive
Blockchain
Hyperparameter
Distributed Learning
Information Privacy
Forgetting
Encryption
Distributed Computing
Artificial Intelligence
Computer Security
Data Mining
Linguistics
Philosophy
Pedagogy
Economics
Microeconomics
Psychology
Authors
Swaraj Kumar, Sandipan Dutta, Shaurya Chatturvedi, M. P. S. Bhatia
Identifier
DOI: 10.1109/bigmm50055.2020.00058
Abstract
Several recent advances in federated learning have made it possible for researchers to train models on private data held by contributing devices without compromising privacy. In this paradigm, each contributor's local updates are aggregated and averaged to update the global model. In this paper, we introduce secure, decentralized training for distributed data. To build an efficient decentralized system, blockchain technology is introduced via Ethereum, which enables a value-driven incentive mechanism that encourages contributors to positively affect the learning of the global model. We provide an enhanced security mechanism by implementing differential privacy and homomorphic encryption. The performance of the global model is significantly boosted by Elastic Weight Consolidation, which prevents catastrophic forgetting, a scenario in which the model learns only from new data and forgets what it learned previously. This proves essential in distributed training, since the model is trained on a spectrum of data that is often clustered on each contributor's device. We introduce an innovative way of using hyperparameter optimization in federated learning with the help of Hyperopt and a deposit-based reward mechanism. Experiments verify the capability of the novel strategies incorporated in our system.
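To make the aggregation step the abstract describes concrete, the sketch below illustrates one round of federated averaging, with a clipped-Gaussian step standing in for the differential-privacy mechanism and an Elastic Weight Consolidation penalty for the local objective. This is a minimal illustration under assumptions, not the authors' implementation: the function names (local_update, dp_noise, federated_average, ewc_penalty) and all constants are hypothetical, and the homomorphic-encryption and Ethereum incentive layers are omitted.

```python
import numpy as np

def local_update(global_weights, local_grad, lr=0.01):
    # One contributor's step: start from the global weights and take a
    # gradient step computed on that contributor's private data.
    return global_weights - lr * local_grad

def dp_noise(update, clip=1.0, sigma=0.1, rng=None):
    # Stand-in for the differential-privacy step: clip the update's L2 norm,
    # then add Gaussian noise before the update leaves the device.
    # (clip and sigma are illustrative, not values from the paper.)
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def federated_average(client_updates):
    # Server-side aggregation: the element-wise mean of all contributors'
    # (noised) weight vectors, i.e. plain federated averaging.
    return np.mean(np.stack(client_updates), axis=0)

def ewc_penalty(weights, anchor_weights, fisher, lam=0.4):
    # Elastic Weight Consolidation: (lam / 2) * sum_i F_i * (w_i - w*_i)^2,
    # added to each local loss so parameters the Fisher information marks
    # as important for earlier data are discouraged from drifting.
    return 0.5 * lam * np.sum(fisher * (weights - anchor_weights) ** 2)

# Toy round: five contributors sharing a 10-parameter "model".
rng = np.random.default_rng(0)
global_w = np.zeros(10)
local_grads = [rng.normal(size=10) for _ in range(5)]  # stand-ins for real gradients
noised = [dp_noise(local_update(global_w, g), rng=rng) for g in local_grads]
global_w = federated_average(noised)
```

In the system the paper describes, the Hyperopt search and the deposit-based reward would wrap this per-round loop, and updates would additionally pass through homomorphic encryption before aggregation; those layers are left out here for brevity.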