Keywords
Differential privacy
Computer science
Federated learning
Trustworthiness
Analysis
Machine learning
Information privacy
Time series
Private information retrieval
Artificial intelligence
Data mining
Privacy protection
Data modeling
Computer security
Database
Authors
Chenxi Huang, Chaoyang Jiang, Zhenghua Chen
Identifier
DOI: 10.1109/iciea58696.2023.10241529
Abstract
Trustworthy federated learning aims to achieve optimal performance while ensuring clients' privacy. Existing privacy-preserving federated learning approaches are mostly tailored to image data and rarely address time series data, despite its many important applications such as machine health monitoring and human activity recognition. Furthermore, protective noise added to a time series analytics model can significantly interfere with learning temporal dependencies, causing a larger drop in accuracy. To address these issues, we develop a privacy-preserving federated learning algorithm for time series data. Specifically, we employ local differential privacy to extend the privacy-protection trust boundary to the clients, and we incorporate a shuffling technique to achieve privacy amplification, mitigating the accuracy decline caused by local differential privacy. Extensive experiments were conducted on five time series datasets. The results show that our algorithm suffers minimal accuracy loss compared with non-private federated learning in both small- and large-client scenarios, and, under the same level of privacy protection, achieves higher accuracy than centralized differentially private federated learning in both scenarios.
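The abstract describes a pipeline in which each client privatizes its own update with local differential privacy and a shuffler mixes the anonymized updates before server aggregation. The sketch below illustrates that general idea only; the Gaussian noise mechanism, clipping norm, noise scale, learning rate, and all function names are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_dp_update(grad, clip_norm=1.0, noise_std=0.5):
    """Client-side local DP: clip the update, then add Gaussian noise.
    clip_norm and noise_std are illustrative values, not the paper's settings."""
    clipped = grad / max(1.0, np.linalg.norm(grad) / clip_norm)
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=grad.shape)

def shuffle_and_aggregate(noisy_updates):
    """Shuffler: randomly permute the anonymized updates before the server
    sees them (the source of privacy amplification), then average."""
    rng.shuffle(noisy_updates)  # permutes rows in place
    return noisy_updates.mean(axis=0)

def federated_round(global_model, client_grads, lr=0.1):
    """One round: clients noise their own updates, the shuffler mixes them,
    and the server applies the averaged update. lr is a hypothetical value."""
    noisy = np.stack([local_dp_update(g) for g in client_grads])
    return global_model - lr * shuffle_and_aggregate(noisy)

# Toy usage: 8 clients and a 4-parameter "model".
model = np.zeros(4)
client_grads = [rng.normal(size=4) for _ in range(8)]
model = federated_round(model, client_grads)
print(model)
```

In this sketch the server only ever sees a shuffled batch of already-noised updates, which is what lets the shuffle model trade a weaker local noise level for the same overall privacy guarantee.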