Keywords
Computer science, Classification, Inference, Retraining, Information leakage, Federated learning, Vulnerability (computing), Information privacy, Data science, Computer security, Knowledge management, Artificial intelligence
Authors
Wang Fei, Baochun Li, Bo Li
Source
Journal: IEEE Network (Institute of Electrical and Electronics Engineers)
Date: 2024-01-01
Pages: 1-7
Cited by: 3
Identifier
DOI: 10.1109/mnet.004.2300056
Abstract
Federated unlearning has emerged very recently as an attempt to realize "the right to be forgotten" in the context of federated learning. While the current literature focuses on designing efficient retraining or approximate unlearning approaches, it largely ignores the information leakage risks introduced by the discrepancy between the models before and after unlearning. In this paper, we present a comprehensive review of prior studies on federated unlearning and on privacy leakage from model updating. We propose new taxonomies to categorize and summarize the state-of-the-art federated unlearning algorithms. We present our findings on the federated unlearning paradigm's inherent vulnerability to inference attacks and summarize defense techniques with the potential to prevent information leakage. Finally, we suggest a privacy-preserving federated unlearning framework with promising research directions to facilitate further studies as future work.
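The leakage risk the abstract highlights can be illustrated with a minimal sketch (hypothetical, not from the paper): if an adversary can query the model both before and after unlearning, the shift in the model's confidence on a candidate sample signals whether that sample was the one "forgotten". Here unlearning is simulated by exact retraining of a toy logistic-regression model without the target point; all data, names, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.5, steps=500):
    """Plain logistic regression fit by full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def confidence(w, x):
    """Predicted probability of class 1 for a single sample."""
    return 1.0 / (1.0 + np.exp(-x @ w))

# Synthetic data: class 1 iff the first feature is positive.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)

# The target sample is deliberately mislabeled relative to that rule,
# so the model that trained on it partially memorizes it.
x_target = np.array([-3.0, 0.0, 0.0, 0.0, 0.0])
y_target = 1.0

X_full = np.vstack([X, x_target])
y_full = np.append(y, y_target)

w_before = train_logreg(X_full, y_full)  # model that saw the target
w_after = train_logreg(X, y)             # "unlearned" by exact retraining

# The drop in confidence on the unlearned sample is the leakage signal
# an inference attack can exploit.
gap = confidence(w_before, x_target) - confidence(w_after, x_target)
print(f"confidence gap on unlearned sample: {gap:.3f}")
```

A positive gap tells the adversary that the target sample was likely present before unlearning and removed afterwards, which is exactly the discrepancy-based vulnerability the paper's review is concerned with.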