Paper Title

Adversarial Attacks on Probabilistic Autoregressive Forecasting Models

Paper Authors

Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev

Paper Abstract

We develop an effective generation of adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values. This setting includes the recently proposed deep probabilistic autoregressive forecasting models that estimate the probability distribution of a time series given its past and achieve state-of-the-art results in a diverse set of application domains. The key technical challenge we address is effectively differentiating through the Monte-Carlo estimation of statistics of the joint distribution of the output sequence. Additionally, we extend prior work on probabilistic forecasting to the Bayesian setting which allows conditioning on future observations, instead of only on past observations. We demonstrate that our approach can successfully generate attacks with small input perturbations in two challenging tasks where robust decision making is crucial: stock market trading and prediction of electricity consumption.
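To make the key technical step concrete, the sketch below illustrates differentiating through a Monte-Carlo estimate of a statistic of the forecast distribution via the reparameterization trick, and using that gradient to craft a norm-bounded input perturbation. This is a minimal, hypothetical illustration rather than the paper's implementation: it assumes a forecaster with a Gaussian output head, and the names `forecaster`, `statistic`, and all hyperparameters are placeholders.

```python
import torch

def sample_forecast(forecaster, x, horizon, n_samples):
    """Draw n_samples Monte-Carlo trajectories of length `horizon`.

    `forecaster` is a hypothetical model mapping an input window of shape
    [batch, T] to the mean and std of a Gaussian over the next value.
    Reparameterizing samples as mu + sigma * eps keeps them differentiable
    with respect to the input x.
    """
    xs = x.unsqueeze(0).expand(n_samples, -1)          # [n_samples, T]
    outs = []
    for _ in range(horizon):
        mu, sigma = forecaster(xs)                     # each: [n_samples]
        eps = torch.randn_like(mu)                     # noise, no gradient
        y = mu + sigma * eps                           # differentiable sample
        outs.append(y)
        # Feed the sample back in autoregressively.
        xs = torch.cat([xs[:, 1:], y.unsqueeze(1)], dim=1)
    return torch.stack(outs, dim=1)                    # [n_samples, horizon]

def attack(forecaster, x, horizon, statistic, target, eps_norm=0.1,
           n_samples=100, steps=50, lr=1e-2):
    """Projected gradient descent on a perturbation `delta` that pushes the
    Monte-Carlo estimate of `statistic` toward an attacker-chosen `target`."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        traj = sample_forecast(forecaster, x + delta, horizon, n_samples)
        est = statistic(traj).mean()                   # MC estimate of E[stat]
        loss = (est - target) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                          # project onto L2 ball
            scale = (eps_norm / (delta.norm() + 1e-12)).clamp(max=1.0)
            delta.mul_(scale)
    return delta.detach()
```

Because the noise `eps` is sampled independently of the input, every sampled trajectory remains a differentiable function of the perturbed input, so gradients of the estimated statistic can be backpropagated to `delta` while the projection step keeps the perturbation small.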
