Computer science
Throughput
Distributed computing
Reputation
Mechanism (biology)
Blockchain
Incentive
Computer security
Wireless
Telecommunications
Social science
Philosophy
Epistemology
Sociology
Economics
Microeconomics
Authors
Haibo Wang,Hongwei Gao,Teng Ma,Chong Li,Jing Tao
Identifier
DOI:10.1016/j.dcan.2024.07.002
Abstract
Distributed Federated Learning (DFL) technology enables participants to cooperatively train a shared model while preserving the privacy of their local datasets, making it a desirable solution for decentralized and privacy-preserving Web3 scenarios. However, DFL faces incentive and security challenges in a decentralized framework. To address these issues, this paper presents a Hierarchical Blockchain-enabled DFL (HBDFL) system, which provides a generic framework for DFL-related applications. The proposed system consists of four major components: a model contribution-based reward mechanism, a Proof of Elapsed Time and Accuracy (PoETA) consensus algorithm, a Distributed Reputation-based Verification Mechanism (DRTM), and an Accuracy-Dependent Throughput Management (ADTM) mechanism. The model contribution-based reward mechanism incentivizes network nodes to train models with their local datasets, while the PoETA consensus algorithm optimizes the tradeoff between shared model accuracy and system throughput. The DRTM improves consensus efficiency, and the ADTM mechanism keeps throughput within a predefined range while improving the shared model accuracy. The performance of the proposed HBDFL system is evaluated by numerical simulations, which show that the system improves the accuracy of the shared model while maintaining high throughput and ensuring security.
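As an illustration only, the sketch below shows one plausible way a contribution-based reward could be allocated: each node's reward is taken to be proportional to the accuracy improvement its local update contributes to the shared model. The function name `compute_rewards`, its parameters, and the proportional rule are assumptions for this sketch; the abstract does not specify the paper's actual contribution metric or reward formula.

```python
from typing import Dict

def compute_rewards(accuracy_gains: Dict[str, float],
                    total_reward: float) -> Dict[str, float]:
    """Split a fixed reward pool among nodes in proportion to the accuracy
    gain each node's local update contributed to the shared model.
    Nodes whose updates did not improve accuracy receive nothing.
    (Illustrative rule only, not the HBDFL paper's exact mechanism.)"""
    positive = {node: max(gain, 0.0) for node, gain in accuracy_gains.items()}
    total_gain = sum(positive.values())
    if total_gain == 0.0:
        return {node: 0.0 for node in accuracy_gains}
    return {node: total_reward * gain / total_gain
            for node, gain in positive.items()}

# Example: three nodes report the accuracy change from their local training.
rewards = compute_rewards({"node_a": 0.020, "node_b": 0.005, "node_c": -0.001},
                          total_reward=100.0)
print(rewards)  # node_a: 80.0, node_b: 20.0, node_c: 0.0
```

A rule of this shape rewards only updates that measurably improve the shared model, which is one simple way to align node incentives with model quality in a decentralized setting.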