Computer science
Initialization
Compressed sensing
Benchmark (surveying)
Compression ratio
Noise (video)
Noise reduction
Data compression
Stochastic gradient descent
Artificial intelligence
Artificial neural network
Image (mathematics)
Internal combustion engine
Geodesy
Geography
Programming language
Automotive engineering
Engineering
Authors
Haomiao Yang, Mengyu Ge, Kunlan Xiang, Jingwei Li
Identifier
DOI:10.1109/tifs.2022.3227761
Abstract
Federated learning (FL) preserves data privacy by exchanging gradients instead of local training data. However, the private data can still be reconstructed from the exchanged gradients. Deep leakage from gradients (DLG) is a classical reconstruction attack that optimizes dummy data toward the real data by making the corresponding dummy and real gradients as similar as possible. Nevertheless, DLG fails with highly compressed gradients, which are crucial for communication-efficient FL. In this study, we propose an effective data reconstruction attack against highly compressed gradients, called the highly compressed gradient leakage attack (HCGLA). In particular, HCGLA is characterized by three key techniques: 1) Because DLG's optimization objective is ill-suited to compression scenarios, we redesign a plausible objective function that makes the compressed dummy gradients similar to the compressed real gradients. 2) Instead of simply initializing dummy data with random noise, as in DLG, we design a novel dummy-data initialization method, Init-Generation, to compensate for the information loss caused by gradient compression. 3) To further enhance reconstruction quality, we train an ad hoc denoising model using a "first optimizing, next filtering, and then reoptimizing" procedure. Extensive experiments on various benchmark datasets and mainstream models show that HCGLA remains an effective reconstruction attack even when gradients are compressed to 0.1%, whereas state-of-the-art attacks can only handle compression to 70%, a 700-fold improvement.
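To make the gradient-matching idea behind DLG concrete, the following is a minimal sketch, not the paper's implementation: a toy single-output linear model with squared loss, where the attacker knows the model weights and the client's shared gradient, and optimizes dummy data until the dummy gradient matches the shared one. All names (`x_hat`, `match_loss`, the backtracking step-size rule) are illustrative choices, not from the paper.

```python
import numpy as np

# Toy setting: linear model f(x) = w.x with squared loss (w.x - y)^2.
# The attacker observes the gradient the client computed on its private
# sample (x_real, y_real) and knows the current weights w.
rng = np.random.default_rng(0)
d = 5
w = rng.normal(size=d)
x_real = rng.normal(size=d)   # the client's private sample
y_real = 1.0

def grad(x, y):
    # d/dw (w.x - y)^2 = 2 (w.x - y) x
    return 2.0 * (w @ x - y) * x

g_real = grad(x_real, y_real)  # the gradient the client shares

def match_loss(x, y):
    # DLG-style L2 gradient-matching objective ||g_dummy - g_real||^2
    return float(np.sum((grad(x, y) - g_real) ** 2))

# Optimize the dummy data (x_hat, y_hat) by gradient descent on the
# matching loss, with a simple backtracking step size for robustness.
x_hat, y_hat = rng.normal(size=d), 0.0
lr, cur = 0.1, match_loss(x_hat, y_hat)
for _ in range(20000):
    r = w @ x_hat - y_hat
    diff = 2.0 * r * x_hat - g_real        # g_dummy - g_real
    # Analytic gradients of the matching loss w.r.t. the dummy data.
    gx = 4.0 * r * diff + 4.0 * (diff @ x_hat) * w
    gy = -4.0 * (diff @ x_hat)
    nx, ny = x_hat - lr * gx, y_hat - lr * gy
    new = match_loss(nx, ny)
    if new < cur:                          # accept the step
        x_hat, y_hat, cur, lr = nx, ny, new, lr * 1.1
    else:                                  # backtrack
        lr *= 0.5

# For this model the gradient is a scaled copy of x, so the recovered
# dummy data aligns with the private sample (scale is absorbed by y_hat).
cos = abs(x_hat @ x_real) / (np.linalg.norm(x_hat) * np.linalg.norm(x_real))
```

When the gradient is heavily sparsified (e.g. top-0.1% as in the paper's setting), most coordinates of `g_real` are zeroed and this plain objective breaks down, which is what motivates HCGLA's redesigned objective of matching the *compressed* dummy gradient to the *compressed* real gradient.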