Paper Title

Search Space of Adversarial Perturbations against Image Filters

Paper Authors

Dang Duy Thang, Toshihiro Matsui

Paper Abstract

The superior performance of deep learning is itself threatened by security issues. Recent findings have shown that deep learning systems are highly vulnerable to adversarial examples, inputs deliberately altered by an attacker to deceive the system. Many defensive methods have been proposed to protect deep learning systems against adversarial examples, but principled strategies for deceiving those defenses are still lacking: whenever a particular countermeasure is proposed, a new, more powerful adversarial attack is invented to defeat it. In this study, we investigate the ability to create adversarial patterns within a search space against defensive methods that use image filters. Experimental results on the ImageNet image classification task show a correlation between the search space of adversarial perturbations and the filters. These findings open a new direction for building stronger attacks on deep learning systems.
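To make the idea concrete, below is a minimal sketch of how one might probe this interaction: sweep the perturbation budget (one axis of the search space) and check which perturbations still fool the model after a filtering defense is applied. The abstract does not specify an attack, filter, or model; FGSM, a per-channel median filter, and a pretrained ResNet-50 are assumptions chosen for illustration, and `fgsm`, `median_defense`, and `fools_after_filter` are hypothetical helper names, not the authors' code.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
from scipy.ndimage import median_filter

# Pretrained classifier standing in for the target system
# (an assumption; the abstract does not name a model).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm(x, label, eps):
    # One-step FGSM: move x by eps along the sign of the loss gradient.
    # (The abstract does not name an attack; FGSM is assumed here.)
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def median_defense(x, size=3):
    # Per-channel spatial median filter, a common input-filter defense.
    arr = x.squeeze(0).numpy()                      # (3, H, W)
    filtered = median_filter(arr, size=(1, size, size))
    return torch.from_numpy(filtered).unsqueeze(0)  # back to (1, 3, H, W)

def fools_after_filter(x, label, eps):
    # Does an eps-bounded perturbation still fool the model once the
    # defensive filter has been applied to the input?
    x_adv = fgsm(x, label, eps)
    pred = model(median_defense(x_adv)).argmax(dim=1)
    return bool((pred != label).item())

# Sweep the perturbation budget (one axis of the "search space") against
# the filter. x is a (1, 3, H, W) image in [0, 1] and y its label tensor;
# input normalization is omitted for brevity.
# for eps in (0.5 / 255, 1 / 255, 2 / 255, 4 / 255, 8 / 255):
#     print(eps, fools_after_filter(x, y, eps))
```

Intuitively, larger budgets are more likely to survive filtering but are also easier to perceive, which is why characterizing how the perturbation search space relates to a given filter matters for building stronger attacks.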
