Paper Title

Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks

Paper Authors

Roi Pony, Itay Naeh, Shie Mannor

Paper Abstract

Deep neural networks for video classification, just like image classification networks, may be subject to adversarial manipulation. The main difference between image classifiers and video classifiers is that the latter usually use temporal information contained within the video. In this work we present a manipulation scheme for fooling video classifiers by introducing a flickering temporal perturbation that in some cases may go unnoticed by human observers and is implementable in the real world. After demonstrating the manipulation of action classification on single videos, we generalize the procedure to construct universal adversarial perturbations, achieving a high fooling ratio. In addition, we generalize the universal perturbation further and produce a temporal-invariant perturbation that can be applied to the video without synchronizing the perturbation to the input. The attack was implemented on several target models and its transferability was demonstrated. These properties allow us to bridge the gap between simulated environments and real-world applications, as demonstrated in this paper for the first time for an over-the-air flickering attack.
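The core idea of the attack lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of the basic single-video, untargeted case: the perturbation is one RGB offset per frame, spatially uniform, so on screen it appears as a temporal flicker. The model interface, loss weights (`beta1`, `beta2`), and the exact regularizer forms are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def flickering_attack(model, video, label, steps=200, lr=1e-2,
                      beta1=0.1, beta2=0.1, eps=0.1):
    """Sketch of a flickering attack on a video classifier.

    video: tensor of shape (T, C, H, W), values in [0, 1].
    The perturbation delta has shape (T, C, 1, 1): a single RGB offset
    per frame, broadcast over the spatial dimensions.
    """
    T, C, H, W = video.shape
    delta = torch.zeros(T, C, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv = (video + delta).clamp(0, 1)       # broadcast over H, W
        logits = model(adv.unsqueeze(0))        # assumed (1, num_classes) output
        # Untargeted objective: maximize the loss of the true class.
        adv_loss = -F.cross_entropy(logits, label.unsqueeze(0))
        # Perceptibility regularizers (assumed forms): keep the flicker
        # small in amplitude and temporally smooth.
        thin = delta.pow(2).mean()
        rough = (delta - delta.roll(1, dims=0)).pow(2).mean()
        loss = adv_loss + beta1 * thin + beta2 * rough
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)             # bound flicker amplitude
    return delta.detach()
```

The universal and temporal-invariant variants described in the abstract would, roughly, amount to averaging this objective over a set of videos and over cyclic temporal shifts of `delta`, so that one fixed flicker pattern fools the classifier regardless of when playback starts.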
