Title
On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks

Authors

Dyrmishi, Salijona; Ghamizi, Salah; Simonetto, Thibault; Le Traon, Yves; Cordy, Maxime

Abstract

While the literature on security attacks and defense of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concern about the under-explored field of realistic adversarial attacks and their implications on the robustness of real-world systems. Our paper paves the way for a better understanding of adversarial robustness against realistic attacks and makes two major contributions. First, we conduct a study on three real-world use cases (text classification, botnet detection, malware detection) and five datasets in order to evaluate whether unrealistic adversarial examples can be used to protect models against realistic examples. Our results reveal discrepancies across the use cases, where unrealistic examples can either be as effective as the realistic ones or may offer only limited improvement. Second, to explain these results, we analyze the latent representation of the adversarial examples generated with realistic and unrealistic attacks. We shed light on the patterns that discriminate which unrealistic examples can be used for effective hardening. We release our code, datasets and models to support future research in exploring how to reduce the gap between unrealistic and realistic adversarial attacks.
