Paper Title


Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations

Authors

Weiqi Peng, Jinghui Chen

Abstract


Owing much to the revolution of information technology, recent progress in deep learning has benefited enormously from vastly enhanced access to data available in various digital formats. However, in certain scenarios, people may not want their data to be used for training commercial models, which has motivated the study of how to attack the learnability of deep learning models. Previous work on learnability attacks considers only the goal of preventing unauthorized exploitation of a specific dataset, not the process of restoring learnability for authorized cases. To tackle this issue, this paper introduces and investigates a new concept called a "learnability lock" for controlling a model's learnability on a specific dataset with a special key. In particular, we propose an adversarial invertible transformation, which can be viewed as a mapping from image to image, that slightly modifies data samples so that they become "unlearnable" by machine learning models with negligible loss of visual features. Meanwhile, one can unlock the learnability of the dataset and train models normally using the corresponding key. The proposed learnability lock leverages class-wise perturbation, which applies a universal transformation function to data samples of the same label. This ensures that learnability can be easily restored with a simple inverse transformation while remaining difficult to detect or reverse-engineer. We empirically demonstrate the success and practicability of our method on visual classification tasks.
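To make the lock/unlock mechanism concrete, the sketch below illustrates the idea of a class-wise invertible perturbation with a simplified additive transform. This is an illustrative assumption, not the paper's actual method: the paper learns an adversarial invertible image-to-image transformation, whereas here the "key" is just one bounded random perturbation per class, and `make_class_keys`, `lock`, and `unlock` are hypothetical helper names.

```python
import numpy as np

def make_class_keys(num_classes, shape, epsilon=8 / 255, seed=0):
    # Hypothetical key generation: one random perturbation per class,
    # bounded by epsilon so the visual change stays negligible.
    rng = np.random.default_rng(seed)
    return {c: rng.uniform(-epsilon, epsilon, size=shape).astype(np.float32)
            for c in range(num_classes)}

def lock(images, labels, keys):
    # "Lock" the dataset: apply the same (class-wise) additive
    # perturbation to every sample sharing a label.
    locked = images.copy()
    for i, y in enumerate(labels):
        locked[i] = locked[i] + keys[y]
    return locked

def unlock(images, labels, keys):
    # "Unlock" with the key: the inverse transformation simply
    # subtracts the class-wise perturbation, restoring learnability.
    restored = images.copy()
    for i, y in enumerate(labels):
        restored[i] = restored[i] - keys[y]
    return restored
```

In this simplified form the transform is exactly invertible, matching the abstract's claim that an authorized party can restore the data with a simple inverse transformation, while a party without the per-class keys cannot easily undo the perturbation.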
