Paper Title
Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms
Paper Authors
Paper Abstract
This work studies the threat of adversarial attacks on multivariate probabilistic forecasting models and viable defense mechanisms. Our study discovers a new attack pattern that negatively impacts the forecast of a target time series by making strategic, sparse (imperceptible) modifications to the past observations of a small number of other time series. To mitigate the impact of such attacks, we develop two defense strategies. First, we extend a randomized smoothing technique, previously developed for classification, to multivariate forecasting scenarios. Second, we develop an adversarial training algorithm that learns to create adversarial examples while simultaneously optimizing the forecasting model to improve its robustness against such adversarial perturbations. Extensive experiments on real-world datasets confirm that our attack schemes are powerful and that our defense algorithms are more effective than baseline defense mechanisms.
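To give a sense of the first defense described above, here is a minimal sketch of randomized smoothing applied to forecasting: the smoothed forecaster averages a base model's predictions over many Gaussian-perturbed copies of the input history. The function and the toy naive forecaster below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def randomized_smoothing_forecast(model, history, sigma=0.1, n_samples=100, seed=0):
    """Smooth a forecaster by averaging its predictions over
    Gaussian-perturbed copies of the input history.

    model:   callable mapping a history array to a forecast array
    history: past observations, shape (num_series, history_len)
    sigma:   standard deviation of the i.i.d. Gaussian noise
    """
    rng = np.random.default_rng(seed)
    forecasts = []
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=history.shape)
        forecasts.append(model(history + noise))
    # the smoothed forecast is the empirical mean over noisy inputs
    return np.mean(forecasts, axis=0)

# toy base forecaster: predict the last observed value of each series
naive_model = lambda hist: hist[:, -1]

history = np.ones((3, 10))  # 3 series, 10 past time steps
smoothed = randomized_smoothing_forecast(naive_model, history, sigma=0.05)
print(smoothed.shape)  # one smoothed prediction per series
```

Because the output depends on many independently perturbed inputs, a sparse modification to a few past observations has a bounded effect on the averaged forecast, which is the intuition behind certifying robustness via smoothing.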