Title
Benchmarking adversarial attacks and defenses for time-series data
Authors
Abstract
The adversarial vulnerability of deep networks has spurred the interest of researchers worldwide. Unsurprisingly, as with images, adversarial examples also translate to time-series data, since they are an inherent weakness of the model itself rather than of the modality. Several attempts have been made to defend against these adversarial attacks, particularly for the visual modality. In this paper, we perform detailed benchmarking of well-proven adversarial defense methodologies on time-series data. We restrict ourselves to the $L_{\infty}$ threat model. We also explore the trade-off between smoothness and clean accuracy for regularization-based defenses, to better understand what these defenses offer. Our analysis shows that the explored adversarial defenses offer robustness against both strong white-box and black-box attacks. This paves the way for future research on adversarial attacks and defenses, particularly for time-series data.
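Under the $L_{\infty}$ threat model mentioned above, every element of the perturbation is bounded by a budget $\epsilon$, and a standard strong white-box attack in that setting is projected gradient descent (PGD). The sketch below is illustrative only and is not the paper's code: it runs PGD against a toy linear scorer with a hinge-style loss, where `w`, `grad_fn`, and all hyperparameter values are assumptions chosen for the example.

```python
import numpy as np

def pgd_linf(x, y, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """PGD attack under an L-infinity ball of radius eps.

    x       : clean time-series sample (1-D numpy array)
    y       : true label (+1 or -1 for this toy margin loss)
    grad_fn : gradient of the loss w.r.t. the input (illustrative)
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # gradient-ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the L_inf ball
    return x_adv

# Toy linear scorer f(x) = w . x with hinge loss max(0, 1 - y * w . x);
# its input gradient is -y * w whenever the margin term is active.
w = np.array([0.5, -1.0, 0.8, 0.3])

def grad_fn(x, y):
    return -y * w if 1.0 - y * np.dot(w, x) > 0 else np.zeros_like(x)

x = np.array([0.2, 0.1, -0.3, 0.4])
x_adv = pgd_linf(x, y=+1, grad_fn=grad_fn, eps=0.1)
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-9   # perturbation respects the budget
```

Note how the projection step (`np.clip`) is what enforces the threat model: however many ascent steps are taken, the adversarial series never leaves the $\epsilon$-ball around the clean input.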