Paper Title

Backdoor Attacks on Time Series: A Generative Approach

Authors

Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey

Abstract


Backdoor attacks have emerged as one of the major security threats to deep learning models, as they can easily control a model's test-time predictions by pre-injecting a backdoor trigger into the model at training time. While backdoor attacks have been extensively studied on images, few works have investigated the threat of backdoor attacks on time series data. To fill this gap, in this paper we present a novel generative approach for time series backdoor attacks against deep learning based time series classifiers. Backdoor attacks have two main goals: high stealthiness and high attack success rate. We find that, compared to images, these two goals can be more challenging to achieve on time series: time series have fewer input dimensions and lower degrees of freedom, making it hard to achieve a high attack success rate without compromising stealthiness. Our generative approach addresses this challenge by generating trigger patterns that are as realistic as real time series patterns, while achieving a high attack success rate without causing a significant drop in clean accuracy. We also show that the proposed attack is resistant to potential backdoor defenses. Furthermore, we propose a novel universal generator that can poison any type of time series with a single generator, enabling universal attacks without fine-tuning the generative model for new time series datasets.
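To make the attack setting concrete, the sketch below shows the generic poisoning step the abstract describes: an additive trigger pattern is blended into a small fraction of training series, and those samples are relabeled to the attacker's target class. This is an illustrative sketch only, not the paper's method; the fixed sinusoidal `trigger` stands in for the output of the paper's trained generative model, and all names (`poison_time_series`, `alpha`, `poison_rate`) are hypothetical.

```python
import numpy as np

def poison_time_series(x, trigger, target_label, alpha=0.1):
    """Blend an additive trigger into a univariate time series and
    relabel it to the attacker's target class.

    `trigger` is a stand-in for a generator-produced pattern; in the
    paper the pattern is produced by a generative model so that it
    looks like a realistic time series segment (stealthiness goal).
    """
    x_poisoned = x + alpha * trigger  # small additive perturbation
    return x_poisoned, target_label

# Hypothetical usage: poison 10% of a toy training set.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 128))   # 100 series of length 128
y = rng.integers(0, 5, size=100)      # 5-class labels
trigger = np.sin(np.linspace(0, 4 * np.pi, 128))  # stand-in trigger

poison_rate = 0.1
idx = rng.choice(len(X), size=int(poison_rate * len(X)), replace=False)
for i in idx:
    X[i], y[i] = poison_time_series(X[i], trigger, target_label=0)
```

A classifier trained on `(X, y)` would then associate the trigger pattern with class 0, so adding the same trigger to any test series flips its prediction; the attack's stealthiness comes from keeping the perturbation realistic and small.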
