Homomorphic encryption
Computer science
Encryption
Differential privacy
Computer security
Information privacy
Privacy software
Data mining
Authors
Elnaz Rabieinejad, Abbas Yazdinejad, Ali Dehghantanha, Gautam Srivastava
Source
Journal: IEEE Transactions on Consumer Electronics
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-03
Volume/Issue: 70 (1): 4258-4265
Citations: 4
Identifiers
DOI:10.1109/tce.2024.3349490
Abstract
As the adoption of Consumer Internet of Things (CIoT) devices surges, so do concerns about security vulnerabilities and privacy breaches. Given their integration into daily life and data collection capabilities, it is crucial to safeguard user privacy against unauthorized access and potential leaks proactively. Federated learning, an advanced machine learning paradigm, provides a promising solution by inherently prioritizing privacy, circumventing the need for centralized data collection, and bolstering security. Yet, federated learning opens up avenues for adversaries to extract critical information from the machine learning model through data leakage and model inference attacks targeted at the central server. In response to this particular concern, we present an innovative two-level privacy-preserving framework in this paper. This framework synergistically combines federated learning with partially homomorphic encryption, which we favor over other methods such as fully homomorphic encryption and differential privacy. Our preference for partially homomorphic encryption is based on its superior balance between computational efficiency and model performance. This advantage becomes particularly relevant when considering the intense computational demands of fully homomorphic encryption and the loss of model accuracy often associated with differential privacy. Incorporating partially homomorphic encryption augments federated learning's privacy assurance, introducing an additional protective layer. The fundamental properties of partially homomorphic encryption enable the central server to aggregate and compute operations on the encrypted local models without decryption, thereby preserving sensitive information from potential exposure. Empirical results substantiate the efficacy of the proposed framework, which significantly reduces attack prediction error rates and false alarms compared to conventional methods. Moreover, through security analysis, we demonstrate our proposed framework's enhanced privacy compared to existing methods that deploy federated learning for attack detection.
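The additive property the abstract relies on is what partially homomorphic schemes such as Paillier provide: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so a server can aggregate encrypted client updates without holding a decryption key. The sketch below is a minimal toy illustration of that idea, not the paper's implementation; it assumes a textbook Paillier construction with small hard-coded primes and a hypothetical fixed-point encoding of model weights.

```python
"""
Minimal illustrative sketch (not the paper's implementation) of one round of
federated averaging in which clients encrypt their model updates under the
Paillier cryptosystem, an additively (partially) homomorphic scheme. The
central server multiplies ciphertexts to sum the updates without decrypting
any individual model; only the key holder recovers the aggregate.
"""
import math
import random

# --- Textbook Paillier with toy, hard-coded primes (illustration only;
# --- real deployments use keys of 2048 bits or more) -------------------------
P, Q = 1_000_003, 1_000_033
N = P * Q
N_SQ = N * N
G = N + 1                              # standard generator choice g = n + 1
LAM = math.lcm(P - 1, Q - 1)           # Carmichael function lambda(n)
MU = pow(LAM, -1, N)                   # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 with random r coprime to n."""
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(G, m, N_SQ) * pow(r, N, N_SQ)) % N_SQ

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    x = pow(c, LAM, N_SQ)
    return (((x - 1) // N) * MU) % N

def add_cipher(c1: int, c2: int) -> int:
    """Additive homomorphism: the product of ciphertexts encrypts the sum."""
    return (c1 * c2) % N_SQ

# --- Hypothetical fixed-point encoding so float weights fit the plaintext space
SCALE = 10_000

def encode(w: float) -> int:
    return int(round(w * SCALE)) % N

def decode(m: int) -> float:
    if m > N // 2:                     # map large residues back to negatives
        m -= N
    return m / SCALE

# --- One round of encrypted aggregation (hypothetical client updates) ---------
client_updates = [
    [0.12, -0.30, 0.45],
    [0.10, -0.25, 0.40],
    [0.14, -0.35, 0.50],
]

# Each client encrypts its local weights before uploading them.
encrypted_updates = [[encrypt(encode(w)) for w in upd] for upd in client_updates]

# The server sums ciphertexts coordinate-wise; plaintext weights are never exposed.
aggregate = encrypted_updates[0]
for upd in encrypted_updates[1:]:
    aggregate = [add_cipher(a, c) for a, c in zip(aggregate, upd)]

# The key holder decrypts the sum and averages it to obtain the global model.
global_model = [decode(decrypt(c)) / len(client_updates) for c in aggregate]
print(global_model)                    # approximately [0.12, -0.30, 0.45]
```

In a deployment of this kind the decryption key is held by a party other than the aggregating server (otherwise the server could simply decrypt individual updates); the toy above collapses the roles only to keep the example short.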