Paper Title
Deceiving Image-to-Image Translation Networks for Autonomous Driving with Adversarial Perturbations
Paper Authors
Paper Abstract
Deep neural networks (DNNs) have achieved impressive performance on computer vision problems; however, they have been found to be vulnerable to adversarial examples. For this reason, adversarial perturbations have recently been studied from several perspectives. Most previous work, however, has focused on image classification tasks, and adversarial perturbations have never been studied for image-to-image (Im2Im) translation tasks, which have shown great success in handling paired and/or unpaired mapping problems in the fields of autonomous driving and robotics. This paper examines different types of adversarial perturbations that can fool Im2Im frameworks for autonomous driving purposes. We propose both quasi-physical and digital adversarial perturbations that can make Im2Im models yield unexpected results. We then empirically analyze these perturbations and show that they generalize well under both paired settings for image synthesis and unpaired settings for style transfer. We also validate that there exist perturbation thresholds beyond which the Im2Im mapping is disrupted or rendered impossible. The existence of these perturbations reveals crucial weaknesses in Im2Im models. Lastly, we show how our methods illustrate the way perturbations affect output quality, pioneering improvements in the robustness of current SOTA networks for autonomous driving.
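To make the idea of a digital adversarial perturbation against an Im2Im model concrete, here is a minimal sketch. It is not the paper's method: it replaces a real generator (e.g. pix2pix or CycleGAN) with a toy linear "translator" and uses a generic PGD-style attack that searches, within an L-infinity budget, for a perturbation that maximally shifts the translated output. All names and parameters below are illustrative assumptions.

```python
import numpy as np

# Toy stand-ins (assumptions, not the paper's setup): a linear map W acts as
# the "image-to-image translator", and x is a flattened 16-dimensional "image".
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16)) / 4.0   # toy translator weights
x = rng.normal(size=16)               # toy clean input

def translate(img: np.ndarray) -> np.ndarray:
    """Toy differentiable Im2Im mapping (a real model would be a deep generator)."""
    return W @ img

def pgd_perturbation(x: np.ndarray, eps: float = 0.05,
                     alpha: float = 0.01, steps: int = 20) -> np.ndarray:
    """PGD-style digital attack: find ||delta||_inf <= eps that maximizes
    the change in the translated output relative to the clean output."""
    y_clean = translate(x)
    delta = rng.uniform(-eps, eps, size=x.shape)  # random start inside the ball
    for _ in range(steps):
        # For this linear model, grad of ||W(x+delta) - y_clean||^2 w.r.t.
        # delta is 2 W^T (W(x+delta) - y_clean); ascend and project back.
        grad = 2.0 * W.T @ (translate(x + delta) - y_clean)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

delta = pgd_perturbation(x)
drift = np.linalg.norm(translate(x + delta) - translate(x))
baseline = np.linalg.norm(translate(x + rng.uniform(-0.05, 0.05, 16)) - translate(x))
print(drift, baseline)  # the optimized perturbation shifts the output more than noise
```

The same loop structure carries over to a real generator by replacing the closed-form gradient with autodiff; the "perturbation threshold" discussed in the abstract corresponds to the budget `eps` beyond which the mapping breaks down.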