Paper Title

Adversarial Feature Selection against Evasion Attacks

Authors

Fei Zhang, Patrick P. K. Chan, Battista Biggio, Daniel S. Yeung, Fabio Roli

Abstract

Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed. While previous work has mainly focused on devising adversary-aware classification algorithms to counter evasion attempts, only a few authors have considered the impact of using reduced feature sets on classifier security against the same attacks. An interesting, preliminary result is that classifier security against evasion may even be worsened by the application of feature selection. In this paper, we provide a more detailed investigation of this aspect, shedding some light on the security properties of feature selection against evasion attacks. Inspired by previous work on adversary-aware classifiers, we propose a novel adversary-aware feature selection model that can improve classifier security against evasion attacks by incorporating specific assumptions on the adversary's data manipulation strategy. We focus on an efficient, wrapper-based implementation of our approach, and experimentally validate its soundness on different application examples, including spam and malware detection.
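The wrapper-based idea described in the abstract can be illustrated with a toy sketch: instead of scoring a candidate feature subset by accuracy alone, the wrapper adds a robustness term that rewards subsets on which evasion requires larger data manipulations. The sketch below is a minimal illustration under strong simplifying assumptions, not the paper's actual formulation: it uses synthetic Gaussian data, a nearest-centroid classifier, a distance-to-boundary proxy for the adversary's evasion effort, and a hypothetical trade-off weight `lam`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: benign (y=0) and malicious (y=1) samples.
n, d = 200, 6
X0 = rng.normal(0.0, 1.0, size=(n, d))  # benign cluster
X1 = rng.normal(2.0, 1.0, size=(n, d))  # malicious cluster
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def centroid_classifier(Xs, ys):
    """Fit a nearest-centroid classifier; return (predict, c0, c1)."""
    c0 = Xs[ys == 0].mean(axis=0)
    c1 = Xs[ys == 1].mean(axis=0)
    def predict(Z):
        d0 = np.linalg.norm(Z - c0, axis=1)
        d1 = np.linalg.norm(Z - c1, axis=1)
        return (d1 < d0).astype(int)
    return predict, c0, c1

def security_score(Xs, ys, lam=0.5):
    """Hypothetical security-aware objective: accuracy plus a robustness
    term, i.e. the average distance a malicious sample must travel to
    cross the decision boundary (a proxy for minimal evasion effort)."""
    predict, c0, c1 = centroid_classifier(Xs, ys)
    acc = (predict(Xs) == ys).mean()
    mal = Xs[ys == 1]
    # Boundary is the hyperplane midway between the two centroids.
    w = c0 - c1
    mid = (c0 + c1) / 2.0
    dist = np.abs((mal - mid) @ w) / np.linalg.norm(w)
    return acc + lam * dist.mean()

def greedy_adversarial_selection(X, y, k):
    """Wrapper-style forward selection maximising the security-aware score."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_f, best_s = None, -np.inf
        for f in remaining:
            s = security_score(X[:, selected + [f]], y)
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

subset = greedy_adversarial_selection(X, y, k=3)
print(sorted(subset))  # indices of the three features chosen by the wrapper
```

Swapping `security_score` for plain accuracy recovers ordinary wrapper selection; the robustness term is what makes the selection adversary-aware in this sketch.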
