UEFL: Universal and Efficient Privacy-Preserving Federated Learning
Computer Science
Information Privacy
Differential Privacy
Computer Security
Computer Networks
Internet Privacy
Theoretical Computer Science
Data Mining
Authors
Zhiqiang Li, Haiyong Bao, Hao Pan, Menghong Guan, Cheng Huang, Hong-Ning Dai
Source
Journal: IEEE Internet of Things Journal (Institute of Electrical and Electronics Engineers). Date: 2025-01-01. Pages: 1-1
Identifier
DOI:10.1109/jiot.2025.3525731
Abstract
Federated Learning (FL) is a distributed machine learning framework that enables model training across multiple clients without requiring access to their local data. However, FL still poses privacy risks: curious clients may mount inference attacks (e.g., membership-inference and model-inversion attacks) to extract sensitive information from other participants. Existing solutions typically fail to strike a good balance between performance and privacy, or apply only to specific FL scenarios. To address these challenges, we propose a universal and efficient privacy-preserving FL framework based on matrix theory. Specifically, we design the Improved Extended Hill Cryptosystem (IEHC), which efficiently encrypts model parameters while supporting a secure ReLU function. To accommodate different training tasks, we design the Secure Loss Function Computation (SLFC) protocol, which computes the derivatives of various loss functions while preserving the data privacy of both client and server; we instantiate SLFC for three classic loss functions: MSE, cross-entropy, and L1. Extensive experiments demonstrate that our approach robustly defends against various inference attacks, and model-training experiments in diverse FL scenarios show that our method offers significant advantages across most metrics.
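The abstract does not spell out IEHC's construction, but its namesake, the classical Hill cryptosystem, encrypts a vector by multiplying it with an invertible key matrix. The sketch below (a hypothetical illustration, not the paper's IEHC) shows why matrix-based encryption suits FL: encryption is linear, so summed ciphertexts decrypt to summed parameters, which is exactly what aggregation schemes such as FedAvg require.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Hill-style matrix encryption over real-valued vectors.
# The paper's IEHC adds further protections (e.g., secure ReLU support);
# this only demonstrates the core matrix-theoretic idea.
n = 4
K = rng.standard_normal((n, n))
while abs(np.linalg.det(K)) < 1e-3:   # ensure the key matrix is invertible
    K = rng.standard_normal((n, n))
K_inv = np.linalg.inv(K)

def encrypt(p: np.ndarray) -> np.ndarray:
    """Encrypt a parameter vector p as K @ p."""
    return K @ p

def decrypt(c: np.ndarray) -> np.ndarray:
    """Decrypt a ciphertext c with the inverse key."""
    return K_inv @ c

# Linearity: the sum of ciphertexts decrypts to the sum of plaintexts,
# so a server can aggregate encrypted client updates.
p1 = rng.standard_normal(n)
p2 = rng.standard_normal(n)
agg = decrypt(encrypt(p1) + encrypt(p2))
assert np.allclose(agg, p1 + p2)
```

A plain Hill scheme is vulnerable to known-plaintext attacks, which is presumably part of what the "Improved Extended" construction addresses; the sketch only conveys the aggregation-friendly linearity.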
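For context on what SLFC must compute, here are the standard plaintext derivatives of the three loss functions named in the abstract; the protocol's contribution is evaluating these without exposing predictions or labels, which this illustrative (non-secure) code does not attempt.

```python
import numpy as np

def mse_grad(y_hat: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Gradient of mean squared error: 2 (y_hat - y) / n
    return 2.0 * (y_hat - y) / y_hat.size

def ce_grad(logits: np.ndarray, y_onehot: np.ndarray) -> np.ndarray:
    # Gradient of softmax cross-entropy w.r.t. logits: softmax(logits) - y
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p - y_onehot

def l1_grad(y_hat: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Subgradient of the L1 loss |y_hat - y|
    return np.sign(y_hat - y)
```

These closed forms are what make a generic secure-derivative protocol attractive: each reduces to elementwise arithmetic plus, for cross-entropy, a softmax, so one protocol can serve regression and classification tasks alike.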