Federated learning enables cooperative computation among multiple participants while protecting user privacy. Most existing federated learning algorithms assume that all participants are trustworthy and that their systems are secure. However, two problems arise in real-world scenarios: (1) Malicious clients disrupt federated learning through model poisoning and data poisoning attacks. Although secure aggregation methods have been proposed to address this problem, most of them remain limited in practice. (2) Because data quality and computational resources vary across participants, rewards cannot simply be distributed equally. Moreover, some clients exhibit free-rider behavior, seeking to cheat the reward system and manipulate the global model. Evaluating client contributions and distributing rewards fairly therefore remain challenging.
To address these challenges, we design a trustworthy federated learning framework that secures the entire federated task process. First, we propose a malicious model detection method for secure model aggregation. Second, we propose a fair contribution assessment method that identifies client-side free-riding behavior. Finally, we build the computation process on blockchain and smart contracts to guarantee the trustworthiness and fairness of federated tasks. To validate the performance of our framework, we simulate different types of client attacks and contribution evaluation scenarios on several open-source datasets. The experiments show that our framework ensures the credibility of federated tasks and achieves fair evaluation of client contributions.
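To make the aggregation step concrete, the sketch below shows one common way a server might filter suspicious client updates before averaging, by comparing each update's direction against the coordinate-wise median update. This is only an illustrative sketch under assumed details (the function name `robust_aggregate`, the cosine-similarity criterion, and the threshold value are all hypothetical choices, not the detection method proposed in this paper).

```python
# Illustrative sketch only: filter client updates that deviate from the
# majority direction before aggregation. NOT the paper's detection method;
# the similarity criterion and threshold below are assumptions.
import numpy as np

def robust_aggregate(updates, sim_threshold=0.5):
    """Average client updates, dropping those misaligned with the median update.

    updates: list of 1-D np.ndarray, one flattened model update per client.
    sim_threshold: minimum cosine similarity to the median update to be kept
                   (hypothetical value; a real system would tune it).
    """
    stacked = np.stack(updates)                # shape: (n_clients, n_params)
    reference = np.median(stacked, axis=0)     # robust reference direction
    ref_norm = np.linalg.norm(reference)

    kept = []
    for u in stacked:
        cos = float(u @ reference) / (np.linalg.norm(u) * ref_norm + 1e-12)
        if cos >= sim_threshold:               # keep updates aligned with the majority
            kept.append(u)

    # Fall back to the median itself if filtering rejected everything.
    return np.mean(kept, axis=0) if kept else reference
```

A direction-based filter of this kind catches poisoned updates that point away from the benign consensus, though on its own it cannot detect stealthier attacks such as free-riding, which is why a separate contribution assessment is needed.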