Paper Title
An Embarrassingly Simple Approach for Intellectual Property Rights Protection on Recurrent Neural Networks
Paper Authors
Paper Abstract
Capitalising on deep learning models, offering Natural Language Processing (NLP) solutions as part of Machine Learning as a Service (MLaaS) has generated handsome revenues. At the same time, it is well known that creating these lucrative deep models is non-trivial. Therefore, protecting the intellectual property rights (IPR) of these inventions from being abused, stolen, and plagiarized is vital. This paper proposes a practical approach for IPR protection on recurrent neural networks (RNN) without all the bells and whistles of existing IPR solutions. In particular, we introduce the Gatekeeper concept, which resembles the recurrent nature of the RNN architecture, to embed keys. We also design the model training scheme so that the protected RNN model retains its original performance if and only if a genuine key is presented. Extensive experiments show that our protection scheme is robust and effective against ambiguity and removal attacks, in both white-box and black-box settings, on different RNN variants. Code is available at https://github.com/zhiqin1998/RecurrentIPR
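To make the abstract's Gatekeeper idea concrete, below is a minimal, hedged sketch of one way a secret key could be folded into the recurrent computation so that hidden-state dynamics are only "unlocked" by the genuine key. This is an illustrative assumption, not the authors' exact formulation; the class name `KeyGatedGRUCell`, the `key_proj` layer, and the sigmoid gating are all hypothetical choices made for the example.

```python
# Illustrative sketch (assumed, not the paper's exact method): a GRU cell whose
# hidden state is element-wise modulated by a gate derived from a secret key.
# Training always uses the genuine key, so the model only performs well when
# that key is presented; a forged key perturbs the recurrence at every step.
import torch
import torch.nn as nn


class KeyGatedGRUCell(nn.Module):
    """GRU cell whose hidden state is modulated by a key-derived gate."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Projects the (secret) key vector into a per-dimension gate.
        self.key_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x: torch.Tensor, h: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
        h_new = self.cell(x, h)
        # Gate in (0, 1); a different key yields a different gating pattern,
        # so the hidden dynamics drift away from the trained behaviour.
        gate = torch.sigmoid(self.key_proj(key))
        return gate * h_new


if __name__ == "__main__":
    batch, input_size, hidden_size = 4, 16, 32
    cell = KeyGatedGRUCell(input_size, hidden_size)
    genuine_key = torch.randn(hidden_size)   # kept secret by the model owner
    forged_key = torch.randn(hidden_size)    # an attacker's guess
    x = torch.randn(batch, input_size)
    h = torch.zeros(batch, hidden_size)
    h_genuine = cell(x, h, genuine_key.expand(batch, -1))
    h_forged = cell(x, h, forged_key.expand(batch, -1))
    print((h_genuine - h_forged).abs().mean())  # per-step hidden-state drift
```

In such a scheme, the gap between the genuine-key and forged-key hidden states compounds over time steps, which is consistent with the abstract's claim that the protected RNN retains its original performance only when the genuine key is presented.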