Paper Title
MixNN: A design for protecting deep learning models

Authors

Chao Liu, Hao Chen, Yusen Wu, Rui Jin

Abstract
In this paper, we propose a novel design, called MixNN, for protecting deep learning model structures and parameters. The layers of a deep learning model in MixNN are fully decentralized. Using ideas from mix networks, it hides communication addresses, layer parameters and operations, and both forward and backward message flows among non-adjacent layers. MixNN has the following advantages: 1) an adversary cannot fully control all layers of a model, including its structure and parameters; 2) even if some layers collude, they cannot tamper with other honest layers; 3) model privacy is preserved during the training phase. We provide detailed descriptions for deployment. In one classification experiment, we compared a neural network deployed in a single virtual machine with the same network deployed using the MixNN design on AWS EC2. The results show that MixNN differs by less than 0.001 in classification accuracy, while its total running time is about 7.5 times slower than that of the network running on a single virtual machine.
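The core idea of decentralized layers can be illustrated with a minimal sketch (not the authors' implementation; the `LayerNode` class and its fields are hypothetical). Each layer runs as an independent node that holds only its own parameters and the address of its immediate successor, mimicking the mix-network property that no single party sees the whole model:

```python
# Minimal sketch of MixNN-style decentralization, assuming each layer is a
# node that knows only its adjacent successor. Names here are illustrative.
import numpy as np

class LayerNode:
    """One decentralized layer: it holds its own private weights and the
    address of the next node only; non-adjacent layers stay hidden."""
    def __init__(self, weight, bias, next_node=None):
        self.weight = weight        # parameters private to this node
        self.bias = bias
        self.next_node = next_node  # only the adjacent layer is known

    def forward(self, x):
        # ReLU layer computed locally on this node.
        out = np.maximum(0, x @ self.weight + self.bias)
        # Relay the activation to the next hop, like a mix-network node.
        return self.next_node.forward(out) if self.next_node else out

rng = np.random.default_rng(0)
# Build a 3-layer chain; each node is created knowing only its successor.
l3 = LayerNode(rng.standard_normal((4, 2)), np.zeros(2))
l2 = LayerNode(rng.standard_normal((8, 4)), np.zeros(4), next_node=l3)
l1 = LayerNode(rng.standard_normal((5, 8)), np.zeros(8), next_node=l2)

y = l1.forward(rng.standard_normal((1, 5)))
print(y.shape)  # (1, 2)
```

In the paper's actual design each node would additionally encrypt messages and hide addresses; this sketch only shows the adjacent-hop forwarding structure.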
