Paper Title
On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks
Authors
Abstract
In general, adversarial perturbations superimposed on inputs pose a realistic threat to deep neural networks (DNNs). In this paper, we propose a practical method for generating such adversarial perturbations for black-box attacks, which require access only to the input-output relationship of the network. The attacker thus produces the perturbation without invoking internal functions or accessing the internal states of the DNN. Unlike earlier studies, the algorithm presented in this study requires far fewer query trials to generate the perturbation. Moreover, to demonstrate the effectiveness of the extracted adversarial perturbation, we experiment with a DNN for semantic segmentation. The results show that the network is deceived far more easily by the generated perturbation than by uniformly distributed random noise of the same magnitude.
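To make the black-box setting concrete, the following is a minimal sketch of a query-only perturbation search. It is not the matrix-free algorithm proposed in the paper; it only illustrates the access model described in the abstract, in which the attacker may evaluate the input-output relationship (here the hypothetical `query_loss` interface) but has no access to gradients or internal states. The function names, the greedy random-search strategy, and the toy model are all illustrative assumptions.

```python
# Hypothetical sketch of a query-only (black-box) perturbation search.
# NOT the paper's algorithm: it only illustrates attacks that use forward
# queries of the input-output relationship, with no gradient or state access.

import numpy as np


def query_loss(model, x):
    """Black-box query: returns a scalar score for input x (assumed interface)."""
    return model(x)


def random_search_perturbation(model, x, eps=0.05, n_queries=200, seed=0):
    """Greedy coordinate-wise random search (illustrative only).

    Each trial perturbs one randomly chosen coordinate and keeps the step
    only if the queried loss increases, so the attack uses forward queries
    exclusively -- no Jacobian, gradient, or internal-state access.
    """
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x)
    best = query_loss(model, x + delta)
    for _ in range(n_queries):
        step = np.zeros_like(x)
        idx = rng.integers(x.size)
        step.flat[idx] = eps * rng.choice([-1.0, 1.0])
        candidate = query_loss(model, x + delta + step)
        if candidate > best:  # keep the step only if the model is "more fooled"
            best, delta = candidate, delta + step
    return delta


if __name__ == "__main__":
    # Toy stand-in for a DNN's input-output relationship (a fixed linear score).
    w = np.linspace(-1.0, 1.0, 16)
    toy_model = lambda x: float(w @ x)
    x0 = np.zeros(16)
    pert = random_search_perturbation(toy_model, x0)
    print("loss before:", toy_model(x0), "after:", toy_model(x0 + pert))
```

A naive search of this kind typically spends many queries per accepted step, which is the query-budget issue the abstract contrasts against: the proposed method is claimed to need far fewer query trials than such earlier query-based approaches.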