Leakage (economics)
Computer science
Artificial intelligence
Macroeconomics
Economics
Authors
Haomiao Yang, Dongyun Xue, Mengyu Ge, Jingwei Li, Guowen Xu, Hongwei Li, Rongxing Lu
Identifiers
DOI: 10.1109/TDSC.2024.3387570
Abstract
Federated learning (FL) is a distributed machine learning technique designed to protect the privacy of user data. However, FL has been shown to be vulnerable to gradient leakage attacks (GLA), which can reconstruct private training data from the shared gradients with high probability. Existing attacks are either analytics-based, requiring modification of the FL model, or optimization-based, requiring long convergence times and struggling with the highly compressed gradients used in practical FL systems. This paper presents a pioneering generation-based GLA method, called FGLA, that reconstructs batches of user data without any optimization process. We specifically design a feature separation technique that first extracts the feature of each sample in a batch from the shared gradient and then directly generates the user data from those features. Our extensive experiments on multiple image datasets show that FGLA can reconstruct user images in seconds, with a batch size of 256, from highly compressed gradients (compression ratio of 0.8% or higher), thereby significantly outperforming state-of-the-art methods.
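The feature-separation idea behind generation-based GLA can be made concrete with a small sketch. The code below is only an illustration of the general intuition, not the authors' FGLA implementation: for a cross-entropy loss, the gradient of the final fully connected layer is a sum of per-sample outer products, so when the labels in a batch are distinct each row of that gradient is approximately a scaled copy of one sample's feature, and the bias gradient recovers the scale (an iDLG-style label heuristic). A real attack would then feed each recovered feature to a pretrained generator to produce an image; the toy model, tensor shapes, and the cosine-similarity check here are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, feat_dim, batch = 10, 64, 4

# Toy client model (assumed for illustration): feature extractor + linear classifier.
extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
classifier = nn.Linear(feat_dim, num_classes)

images = torch.rand(batch, 3, 32, 32)
labels = torch.randperm(num_classes)[:batch]  # distinct labels, one per sample

# Client side: the gradient that would be shared with the FL server.
feats = extractor(images)
loss = F.cross_entropy(classifier(feats), labels)
grad_W, grad_b = torch.autograd.grad(loss, (classifier.weight, classifier.bias))

# Attacker side: for cross-entropy, grad_W[c] = sum_i (p_ic - y_ic) * feats[i].
# With distinct labels, row c is dominated by the sample labelled c, and
# dividing by grad_b[c] (= sum_i (p_ic - y_ic)) approximately undoes the scale.
recovered = {}
for c in range(num_classes):
    if grad_b[c] < 0:  # negative bias gradient signals label c is in the batch
        recovered[c] = grad_W[c] / grad_b[c]

# A generation-based attack would now map each recovered feature back to an
# image with a pretrained generator; here we only verify feature recovery.
for c, f_hat in recovered.items():
    i = (labels == c).nonzero().item()
    cos = F.cosine_similarity(f_hat, feats[i].detach(), dim=0).item()
    print(f"label {c}: cosine similarity to true feature = {cos:.3f}")
```

The printed cosine similarities are typically close to 1 in this toy setting, which is what makes a feed-forward "feature to image" generation step plausible without any per-sample optimization; handling gradient compression and large batches is where the paper's actual contribution lies.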