Federated learning
Computer science
Differential privacy
Segmentation
Data sharing
Coding (set theory)
Deep learning
Artificial intelligence
Training set
Machine learning
Data mining
Medicine
Pathology
Set (abstract data type)
Programming language
Alternative medicine
Authors
Wenqi Li,Fausto Milletarì,Daguang Xu,Nicola Rieke,Jonny Hancox,Wentao Zhu,Maximilian Baust,Yan Cheng,Sébastien Ourselin,M. Jorge Cardoso,Andrew Feng
Identifier
DOI:10.1007/978-3-030-32692-0_16
Abstract
Due to medical data privacy regulations, it is often infeasible to collect and share patient data in a centralised data lake. This poses challenges for training machine learning algorithms, such as deep convolutional networks, which often require large numbers of diverse training examples. Federated learning sidesteps this difficulty by bringing code to the patient data owners and only sharing intermediate model training updates among them. Although a high-accuracy model could be achieved by appropriately aggregating these model updates, the model shared could indirectly leak the local training examples. In this paper, we investigate the feasibility of applying differential-privacy techniques to protect the patient data in a federated learning setup. We implement and evaluate practical federated learning systems for brain tumour segmentation on the BraTS dataset. The experimental results show that there is a trade-off between model performance and privacy protection costs.
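The setup the abstract describes, where clients train locally and only share model updates that are perturbed before aggregation, can be illustrated with a minimal sketch. This is not the paper's implementation: the linear model, the `clip`/`noise_std` parameters, and the Gaussian-noise-on-clipped-updates mechanism are simplified stand-ins for the deep segmentation network and the differential-privacy machinery evaluated on BraTS.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, data, targets, lr=0.1):
    """One gradient step on a client's private data (toy stand-in
    for local deep-network training on patient images)."""
    grad = data.T @ (data @ weights - targets) / len(data)
    return weights - lr * grad

def dp_federated_round(weights, clients, clip=1.0, noise_std=0.01):
    """Aggregate clipped, noised model updates (Gaussian-mechanism sketch).

    Only the perturbed update deltas leave each client; the raw
    training examples never do.
    """
    updates = []
    for data, targets in clients:
        delta = local_update(weights, data, targets) - weights
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip / norm)                # bound sensitivity
        delta += rng.normal(0.0, noise_std * clip, delta.shape)  # add DP noise
        updates.append(delta)
    return weights + np.mean(updates, axis=0)                # federated averaging

# Toy usage: three hypothetical data owners fitting one shared model.
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + rng.normal(0.0, 0.01, 50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = dp_federated_round(w, clients)
```

Raising `noise_std` strengthens the privacy guarantee but degrades the aggregated model, which is the performance/privacy trade-off the experiments report.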