Paper Title


Generating Out of Distribution Adversarial Attack using Latent Space Poisoning

Paper Authors

Ujjwal Upadhyay, Prerana Mukherjee

Paper Abstract


Traditional adversarial attacks rely upon perturbations generated from the network's gradients, which are generally safeguarded by gradient-guided search to provide an adversarial counterpart to the network. In this paper, we propose a novel mechanism for generating adversarial examples in which the actual image is not corrupted; rather, its latent space representation is utilized to tamper with the inherent structure of the image while keeping the perceptual quality intact, so that it acts as a legitimate data sample. As opposed to gradient-based attacks, latent space poisoning exploits the inclination of classifiers to model the independent and identically distributed (i.i.d.) nature of the training dataset and tricks them by producing out-of-distribution samples. We train a disentangled variational autoencoder (beta-VAE) to model the data in latent space and then add noise perturbations to the latent space using a class-conditioned distribution function, under the constraint that the result is misclassified as the target label. Our empirical results on the MNIST, SVHN, and CelebA datasets validate that the generated adversarial examples can easily fool robust l_0, l_2, l_inf norm classifiers designed using provably robust defense mechanisms.
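To make the pipeline described in the abstract concrete, below is a minimal sketch (not the authors' released code) of latent-space poisoning with a pretrained beta-VAE and a target classifier. The names encoder, decoder, and classifier, their signatures, and the hyperparameters are assumptions for illustration; where the paper draws perturbations from a class-conditioned distribution, this sketch instead optimizes the latent shift directly under the same target-misclassification constraint.

# Minimal sketch (assumed names and signatures, not the authors' method verbatim):
# perturb the latent code of x so the decoded image fools the classifier,
# without ever editing the pixels directly.
import torch
import torch.nn.functional as F

def latent_space_attack(x, target_label, encoder, decoder, classifier,
                        steps=200, lr=0.05, reg=0.1):
    """encoder(x) -> (mu, logvar), decoder(z) -> image, classifier(img) -> logits
    are all assumed to be pretrained modules with these (hypothetical) APIs."""
    with torch.no_grad():
        mu, logvar = encoder(x)              # posterior of the clean image
    z = mu.clone()                           # start from the posterior mean
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.full((x.size(0),), target_label,
                        dtype=torch.long, device=x.device)

    for _ in range(steps):
        x_adv = decoder(z + delta)           # adversarial image stays on the VAE manifold
        logits = classifier(x_adv)
        # Push the prediction toward the target class; keep the latent shift
        # small so the perceptual quality of the decoded sample is preserved.
        loss = F.cross_entropy(logits, target) + reg * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return decoder(z + delta).detach()

Because the perturbation lives in the latent space of the generative model, the decoded sample remains a plausible image rather than a pixel-space perturbation, which is the key distinction from l_p-bounded gradient attacks.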
