Paper Title
Optimizing Privacy-Preserving Outsourced Convolutional Neural Network Predictions
Paper Authors
Abstract
Convolutional neural networks are machine-learning models widely applied in various prediction tasks, such as computer vision and medical image analysis. Their great predictive power requires extensive computation, which encourages model owners to host the prediction service on a cloud platform. Recent research focuses on the privacy of the query and results, but it does not provide model privacy against the model-hosting server and may leak partial information about the results. Some schemes further require frequent interactions with the querier or impose heavy computation overheads, which discourages queriers from using the prediction service. This paper proposes a new scheme for privacy-preserving neural network prediction in the outsourced setting, i.e., the server cannot learn the query, the (intermediate) results, or the model. Similar to SecureML (S&P'17), a representative work that provides model privacy, we leverage two non-colluding servers with secret sharing and triplet generation to minimize the usage of heavyweight cryptography. Further, we adopt asynchronous computation to improve the throughput, and design garbled circuits for the non-polynomial activation function to keep the same accuracy as the underlying network (instead of approximating it). Our experiments on the MNIST dataset show that our scheme achieves an average of 122x, 14.63x, and 36.69x reduction in latency compared to SecureML, MiniONN (CCS'17), and EzPC (EuroS&P'19), respectively. For the communication costs, our scheme outperforms SecureML by 1.09x, MiniONN by 36.69x, and EzPC by 31.32x on average. On the CIFAR dataset, our scheme achieves a lower latency by a factor of 7.14x and 3.48x compared to MiniONN and EzPC, respectively. Our scheme also provides 13.88x and 77.46x lower communication costs than MiniONN and EzPC on the CIFAR dataset.
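The two-server approach mentioned in the abstract rests on additive secret sharing combined with precomputed multiplication triplets (Beaver triples): each value is split into two random shares, one per non-colluding server, and multiplications on shares consume one triplet each. The following is a minimal sketch of that idea, not the paper's actual protocol; the modulus, function names, and single-process "two servers" are illustrative assumptions.

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus for the secret-sharing ring

def share(x):
    """Split x into two additive shares, one per non-colluding server."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine the two servers' shares into the plaintext value."""
    return (s0 + s1) % P

def beaver_mul(x_sh, y_sh, triple):
    """Multiply two secret-shared values using a Beaver triple (a, b, c = a*b)."""
    a_sh, b_sh, c_sh = triple
    # Each server locally computes its share of e = x - a and f = y - b ...
    e_sh = [(x_sh[i] - a_sh[i]) % P for i in range(2)]
    f_sh = [(y_sh[i] - b_sh[i]) % P for i in range(2)]
    # ... then e and f are opened; they reveal nothing since a, b are random.
    e = reconstruct(*e_sh)
    f = reconstruct(*f_sh)
    # z = c + e*b + f*a + e*f equals x*y; the e*f term goes to one server only.
    z = [(c_sh[i] + e * b_sh[i] + f * a_sh[i]) % P for i in range(2)]
    z[0] = (z[0] + e * f) % P
    return tuple(z)

# Demo: the servers jointly compute 7 * 6 without either one seeing 7 or 6.
a, b = secrets.randbelow(P), secrets.randbelow(P)
triple = (share(a), share(b), share(a * b % P))
z0, z1 = beaver_mul(share(7), share(6), triple)
print(reconstruct(z0, z1))  # 42
```

In a real deployment the triplets are generated in an offline phase (the "triplet generation" the abstract refers to), so the online prediction phase needs only cheap additions and two opened values per multiplication.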