Computer science
Homomorphic encryption
Encryption
Leverage (statistics)
Audit
Cryptography
Data quality
Information privacy
Data mining
Data sharing
Computer security
Machine learning
Medicine
Metric (unit)
Operations management
Alternative medicine
Pathology
Economics
Management
Authors
Zhe Sun,Junping Wan,Lihua Yin,Zhiqiang Cao,Tianjie Luo,Bin Wang
Identifier
DOI:10.1016/j.dcan.2022.05.006
Abstract
The development of data-driven artificial intelligence technology has given birth to a variety of big data applications, and data has become an essential factor in improving them. Federated learning, a privacy-preserving machine learning method, is proposed to leverage data from different data owners. It is typically used in conjunction with cryptographic methods, in which data owners train the global model by sharing encrypted model updates. However, data encryption makes it difficult to assess the quality of these model updates, and malicious data owners may launch attacks such as data poisoning and free-riding. To defend against such attacks, it is necessary to find an approach to audit encrypted model updates. In this paper, we propose a blockchain-based audit approach for encrypted gradients. It uses a behavior chain to record the encrypted gradients from data owners, and an audit chain to evaluate the gradients' quality. Specifically, we propose a privacy-preserving homomorphic noise mechanism in which the noise added to each gradient sums to zero after aggregation, ensuring the availability of the aggregated gradient. In addition, we design a joint audit algorithm that can locate malicious data owners without decrypting individual gradients. Through security analysis and experimental evaluation, we demonstrate that our approach can defend against malicious gradient attacks in federated learning.
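The zero-sum noise idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual protocol (which also involves homomorphic encryption and blockchain recording); it only shows, under assumed NumPy-based numerics, how per-owner noise shares can be constructed so that individual masked gradients are perturbed but their aggregate remains exact:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n data owners, each holding a local gradient vector.
n_owners, dim = 4, 3
gradients = [rng.normal(size=dim) for _ in range(n_owners)]

# Draw n-1 random noise shares; set the last share to the negated sum,
# so all shares add up to the zero vector.
noise = [rng.normal(size=dim) for _ in range(n_owners - 1)]
noise.append(-np.sum(noise, axis=0))

# Each owner submits only its masked gradient.
masked = [g + z for g, z in zip(gradients, noise)]

# After aggregation the noise cancels exactly, so the server recovers
# the true sum without ever seeing any individual gradient in the clear.
agg = np.sum(masked, axis=0)
true_sum = np.sum(gradients, axis=0)
assert np.allclose(agg, true_sum)
```

In the paper's setting the masking would be combined with encryption of the gradients themselves; this sketch only demonstrates the cancellation property that keeps the aggregated gradient available.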