Paper Title
Towards Imperceptible Universal Attacks on Texture Recognition
Paper Authors
Paper Abstract
Although deep neural networks (DNNs) have been shown to be susceptible to image-agnostic adversarial attacks on natural image classification problems, the effects of such attacks on DNN-based texture recognition have yet to be explored. As part of our work, we find that limiting the perturbation's $l_p$ norm in the spatial domain may not be a suitable way to restrict the perceptibility of universal adversarial perturbations for texture images. Based on the fact that human perception is affected by local visual frequency characteristics, we propose a frequency-tuned universal attack method to compute universal perturbations in the frequency domain. Our experiments indicate that our proposed method can produce less perceptible perturbations, yet with similar or higher white-box fooling rates on various DNN texture classifiers and texture datasets, as compared to existing universal attack techniques. We also demonstrate that our approach can improve the attack robustness against defended models as well as the cross-dataset transferability for texture recognition problems.
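The core idea of computing a perturbation in the frequency domain, as described above, can be illustrated with a minimal sketch (not the paper's exact algorithm). Here the perturbation is defined over 2-D DCT coefficients and scaled by per-frequency weights before being mapped back to the spatial domain; the function names, the uniform weights, and the random toy perturbation are all illustrative assumptions:

```python
# Hedged sketch of a frequency-domain perturbation: the perturbation
# lives in the 2-D DCT domain, where each frequency band can be weighted
# (e.g. by a perceptual sensitivity model) before inverting to pixels.
import numpy as np
from scipy.fft import dctn, idctn

def apply_frequency_perturbation(image, delta_freq, weights):
    """image: HxW grayscale array in [0, 1];
    delta_freq: HxW perturbation defined over DCT coefficients;
    weights: HxW per-frequency scaling (placeholder for a tuned model)."""
    coeffs = dctn(image, norm="ortho")       # spatial -> frequency
    coeffs = coeffs + weights * delta_freq   # perturb each frequency band
    adv = idctn(coeffs, norm="ortho")        # frequency -> spatial
    return np.clip(adv, 0.0, 1.0)            # keep a valid pixel range

# Toy usage with a random stand-in for a learned universal perturbation.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
delta = rng.standard_normal((32, 32)) * 0.01
w = np.ones((32, 32))   # uniform weights; a real attack would tune these
adv = apply_frequency_perturbation(img, delta, w)
```

In an actual attack, `delta_freq` would be optimized to maximize the classifier's fooling rate across a dataset, while `weights` would downscale perceptually sensitive frequency bands so the spatial-domain change stays hard to notice.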