Keywords
Verifiable secret sharing
Computer science
Correctness
Inference
MNIST database
Cloud computing
Deep learning
Artificial intelligence
Secret sharing
Secure multi-party computation
Machine learning
Overhead (engineering)
Distributed computing
Theoretical computer science
Data mining
Computer security
Cryptography
Algorithm
Programming language
Set (abstract data type)
Operating system
Authors
Jia Duan, Jiantao Zhou, Yuanman Li, Caishi Huang
Source
Journal: Neurocomputing [Elsevier BV]
Date: 2022-01-21
Volume/Issue: 483: 221-234
Citations: 20
Identifier
DOI: 10.1016/j.neucom.2022.01.061
Abstract
Deep learning inference, which makes trained deep learning models usable by applications, is usually deployed as a cloud-based framework for resource-constrained clients. However, existing cloud-based frameworks suffer from severe information leakage or incur a significant increase in communication cost. In this work, we address the problem of privacy-preserving deep learning inference so that both the input data and the model parameters are protected, with low communication and computational costs. Additionally, the user can verify the correctness of the results with small overhead, which is very important for critical applications. Specifically, by designing secure sub-protocols, we introduce a new layer to collaboratively perform the secure computations involved in the inference. In cooperation with secret sharing, we inject verifiable data into the input, enabling us to check the correctness of the returned inference results. Theoretical analyses and extensive experimental results on the MNIST and CIFAR10 datasets are provided to validate the superiority of our proposed privacy-preserving and verifiable deep learning inference (PVDLI) framework.
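The abstract combines two ingredients: additive secret sharing of the input across servers, and an injected probe whose correct output the client can check afterwards. The following is a minimal sketch of that idea for a single linear layer, not the paper's actual protocol: it assumes a two-server setting, integer arithmetic modulo a public constant, and that the client already knows the probe's expected output (the real PVDLI framework also hides the model weights from the client and covers full networks).

```python
import numpy as np

MOD = 2**31 - 1          # public modulus for additive sharing (illustrative choice)
rng = np.random.default_rng(0)

def share(x, n=2):
    """Split an integer array into n additive shares modulo MOD."""
    parts = [rng.integers(0, MOD, size=x.shape) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % MOD)
    return parts

def reconstruct(parts):
    """Recover the plaintext by summing the shares modulo MOD."""
    return sum(parts) % MOD

# Toy "model": one linear layer held by the cloud servers.
W = rng.integers(0, 10, size=(4, 3))

# Client input batch, plus an injected probe row whose correct output the
# client is assumed to know in advance -- a simplification of the paper's
# verifiable-data idea (the real protocol also keeps W secret from the client).
x = rng.integers(0, 10, size=(2, 4))
probe = np.array([[1, 0, 2, 3]])
expected_probe = (probe @ W) % MOD   # assumed precomputed offline
batch = np.vstack([x, probe])

# Each server applies the layer to its own share; linearity guarantees the
# per-share results sum (mod MOD) to the result on the plaintext input.
s0, s1 = share(batch)
y = reconstruct([(s0 @ W) % MOD, (s1 @ W) % MOD])

# Verification: the probe row must reproduce its precomputed output.
assert np.array_equal(y[-1:], expected_probe)
assert np.array_equal(y[:2], (x @ W) % MOD)
```

Because a linear layer commutes with addition, each server sees only a uniformly random share of the input, yet the reconstructed result equals the plaintext computation; a server that cheats on the batch is caught with some probability because it cannot tell the probe row apart from real data.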