Paper Title

A Protection against the Extraction of Neural Network Models

Paper Authors

Hervé Chabanne, Vincent Despiegel, Linda Guiga

Paper Abstract

Given oracle access to a Neural Network (NN), it is possible to extract its underlying model. Here we introduce a protection that adds parasitic layers, which keep the underlying NN's predictions mostly unchanged while complicating the task of reverse engineering. Our countermeasure relies on approximating a noisy identity mapping with a Convolutional NN. We explain why the introduction of new parasitic layers complicates extraction attacks. We report experiments on the performance and the accuracy of the protected NN.
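The core idea of the countermeasure can be pictured with a short sketch. Below is a minimal PyTorch illustration (not the authors' code): a convolutional layer initialized close to the identity and then slightly perturbed, so that inserting it into an existing CNN leaves predictions mostly unchanged while changing the network's internal structure. The class name `ParasiticIdentityConv` and the noise scale `eps` are assumptions for illustration only.

```python
# A minimal sketch of a "parasitic" layer approximating a noisy identity
# mapping with a convolution, per the abstract. Names and hyperparameters
# (ParasiticIdentityConv, eps) are illustrative assumptions.
import torch
import torch.nn as nn

class ParasiticIdentityConv(nn.Module):
    """Conv2d initialized near the identity: inserting it between two
    existing layers barely changes the model's outputs, but it alters the
    architecture an extraction attack would have to reverse-engineer."""
    def __init__(self, channels: int, eps: float = 1e-3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, bias=False)
        with torch.no_grad():
            # Start from an exact identity kernel (center tap = 1 on the
            # matching input/output channel)...
            weight = torch.zeros(channels, channels, 3, 3)
            for c in range(channels):
                weight[c, c, 1, 1] = 1.0
            # ...then add small noise so the layer is only a *noisy* identity.
            weight += eps * torch.randn_like(weight)
            self.conv.weight.copy_(weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

if __name__ == "__main__":
    # Inserting the layer perturbs activations only slightly.
    x = torch.randn(1, 16, 32, 32)
    layer = ParasiticIdentityConv(channels=16)
    print((layer(x) - x).abs().max())  # small deviation from the identity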
