Paper Title
Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors
Paper Authors
Abstract
Machine learning (ML), and especially deep learning (DL), techniques have been increasingly used in anomaly-based network intrusion detection systems (NIDSs). However, ML/DL has been shown to be extremely vulnerable to adversarial attacks, which is particularly concerning in such security-sensitive systems. Many adversarial attacks have been proposed to evaluate the robustness of ML-based NIDSs. Unfortunately, existing attacks mostly focus on feature-space and/or white-box settings, which rest on assumptions that are impractical in real-world scenarios, leaving practical gray/black-box attacks largely unexplored. To bridge this gap, we conduct the first systematic study of gray/black-box traffic-space adversarial attacks to evaluate the robustness of ML-based NIDSs. Our work outperforms previous studies in the following aspects: (i) practical: the proposed attack automatically mutates original traffic with extremely limited knowledge and affordable overhead while preserving its functionality; (ii) generic: the proposed attack is effective for evaluating the robustness of various NIDSs that use diverse ML/DL models and non-payload-based features; (iii) explainable: we propose a method for explaining why ML-based NIDSs are vulnerable to such attacks. Based on this explanation, we also propose a defense scheme against adversarial attacks to improve system robustness. We extensively evaluate the robustness of various NIDSs built with diverse feature sets and ML/DL models. Experimental results show that our attack is effective (e.g., a >97% evasion rate in half of the cases against Kitsune, a state-of-the-art NIDS) with affordable execution cost, and that the proposed defense can effectively mitigate such attacks (reducing the evasion rate by >50% in most cases).
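The headline numbers above are evasion rates. The abstract does not spell out the formula, but under the standard definition (the fraction of originally detected malicious traffic that is classified as benign after adversarial mutation), the metric can be sketched as follows; the function name and label encoding (1 = detected as malicious, 0 = classified as benign) are illustrative assumptions, not identifiers from the paper.

```python
def evasion_rate(labels_before, labels_after):
    """Fraction of malicious flows detected before mutation (label 1)
    that the NIDS classifies as benign (label 0) after mutation.

    Hypothetical helper illustrating the standard evasion-rate metric;
    label encoding is an assumption, not taken from the paper.
    """
    detected = [i for i, y in enumerate(labels_before) if y == 1]
    if not detected:  # nothing was detected originally, so nothing can evade
        return 0.0
    evaded = sum(1 for i in detected if labels_after[i] == 0)
    return evaded / len(detected)

# Of 4 originally detected flows, 3 evade detection after mutation.
print(evasion_rate([1, 1, 1, 1], [0, 0, 0, 1]))  # 0.75
```

A >97% evasion rate, in these terms, means that after mutation the detector misses more than 97 of every 100 flows it previously flagged; the defense's >50% reduction is measured on this same quantity.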