Computer science
Invariant (physics)
Adversarial system
Architecture
Image (mathematics)
Artificial neural network
Protocol (science)
Deep neural network
Artificial intelligence
Federated learning
Sequence (biology)
Distributed computing
Computer vision
Data mining
Computer network
Theoretical computer science
Medicine
Art
Visual arts
Physics
Alternative medicine
Pathology
Mathematical physics
Biology
Genetics
Authors
Jonas Geiping,Hartmut Bauermeister,Hannah Dröge,Michael Moeller
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Citations: 187
Identifier
DOI: 10.48550/arxiv.2003.14053
Abstract
The idea of federated learning is to collaboratively train a neural network on a server. Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data. This protocol has been designed not only to train neural networks data-efficiently, but also to provide privacy benefits for users, as their input data remains on device and only parameter gradients are shared. But how secure is sharing parameter gradients? Previous attacks have provided a false sense of security, by succeeding only in contrived settings - even for a single image. However, by exploiting a magnitude-invariant loss along with optimization strategies based on adversarial attacks, we show that it is actually possible to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients, and demonstrate that such a break of privacy is possible even for trained deep networks. We analyze the effects of architecture as well as parameters on the difficulty of reconstructing an input image and prove that any input to a fully connected layer can be reconstructed analytically, independently of the remaining architecture. Finally, we discuss settings encountered in practice and show that even averaging gradients over several iterations or several images does not protect the user's privacy in federated learning applications in computer vision.
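To make the attack described in the abstract concrete, below is a minimal sketch of gradient inversion with a magnitude-invariant (cosine-similarity) loss. It is not the authors' released implementation: the model `model`, the observed gradients `target_grads`, the known label, the function name `reconstruct`, the learning rate, step count, and the total-variation weight are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def reconstruct(model, target_grads, label, shape=(1, 3, 32, 32), steps=2000):
    """Sketch: recover an input image from observed parameter gradients."""
    x = torch.randn(shape, requires_grad=True)          # dummy image to optimize
    opt = torch.optim.Adam([x], lr=0.1)
    params = [p for p in model.parameters() if p.requires_grad]

    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), label)
        grads = torch.autograd.grad(loss, params, create_graph=True)

        # Magnitude-invariant objective: 1 - cosine similarity between the
        # gradients produced by the dummy input and the gradients the user shared.
        num = sum((g * t).sum() for g, t in zip(grads, target_grads))
        den = (sum(g.pow(2).sum() for g in grads).sqrt() *
               sum(t.pow(2).sum() for t in target_grads).sqrt())
        rec_loss = 1.0 - num / den

        # Total-variation prior keeps the reconstruction image-like
        # (regularizer choice and weight are assumptions, not from the abstract).
        tv = ((x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean() +
              (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean())
        (rec_loss + 1e-2 * tv).backward()
        opt.step()

    return x.detach()

# Hypothetical usage: target_grads are the gradients a user uploaded,
# label is the (known or inferred) class index of that user's image.
# x_rec = reconstruct(model, target_grads, torch.tensor([3]))
```

One way to see the abstract's analytic result for fully connected layers: for y = Wx + b, the gradient of the loss with respect to row i of W is (dL/db_i) * x^T, so whenever some bias gradient dL/db_i is nonzero, the layer input x can be read off directly as the elementwise ratio (dL/dW_i) / (dL/db_i), regardless of what the rest of the network looks like.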