Paper Title

Memory Defense: More Robust Classification via a Memory-Masking Autoencoder

Authors

Eashan Adhikarla, Dan Luo, Brian D. Davison

Abstract

Many deep neural networks are susceptible to minute perturbations of images that have been carefully crafted to cause misclassification. Ideally, a robust classifier would be immune to small variations in input images, and a number of defensive approaches have been created as a result. One such approach is to learn a latent representation that ignores small changes to the input. However, typical autoencoders easily mingle inter-class latent representations when there are strong similarities between classes, making it harder for a decoder to accurately project the image back to the original high-dimensional space. We propose a novel framework, Memory Defense, an augmented classifier with a memory-masking autoencoder, to counter this challenge. By masking other classes, the autoencoder learns class-specific independent latent representations. We test the model's robustness against four widely used attacks. Experiments on the Fashion-MNIST and CIFAR-10 datasets demonstrate the superiority of our model. Our source code is available on GitHub: https://github.com/eashanadhikarla/MemDefense

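The abstract describes masking the memory slots of non-target classes so the autoencoder retrieves only class-specific memory items. Below is a minimal sketch of one plausible reading of that idea, assuming a learnable memory bank laid out as one contiguous block of slots per class; the module name, the block layout, and all parameters (num_classes, slots_per_class, dim) are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedMemoryModule(nn.Module):
    """Hypothetical class-masked memory lookup: each class owns a contiguous
    block of memory slots, and attention over every other class's slots is
    masked out before the softmax."""

    def __init__(self, num_classes: int, slots_per_class: int, dim: int):
        super().__init__()
        # Learnable memory bank: `slots_per_class` rows per class.
        self.memory = nn.Parameter(torch.randn(num_classes * slots_per_class, dim))
        self.slots_per_class = slots_per_class

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # z: (batch, dim) latent codes; labels: (batch,) integer class ids.
        scores = z @ self.memory.t()  # (batch, total_slots) similarity scores
        # Class id owning each slot, given the contiguous block layout.
        slot_class = (torch.arange(self.memory.size(0), device=z.device)
                      // self.slots_per_class)
        keep = slot_class.unsqueeze(0) == labels.unsqueeze(1)  # (batch, total_slots)
        scores = scores.masked_fill(~keep, float("-inf"))      # mask other classes
        weights = F.softmax(scores, dim=1)
        # Re-express each latent as a mixture of its own class's memory items.
        return weights @ self.memory

# Illustrative usage with stand-in tensors for the encoder output and labels.
mem = MaskedMemoryModule(num_classes=10, slots_per_class=20, dim=64)
z = torch.randn(8, 64)           # would come from the encoder
y = torch.randint(0, 10, (8,))   # class labels
z_hat = mem(z, y)                # (8, 64) class-specific latent for the decoder
```

In a full model of this kind, the encoder output would pass through such a lookup before decoding, so reconstructions for each class can draw only on that class's memory items, keeping the latent representations of similar classes from mingling.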