Paper Title

When are Non-Parametric Methods Robust?

Authors

Robi Bhattacharjee, Kamalika Chaudhuri

Abstract

A growing body of research has shown that many classifiers are susceptible to adversarial examples: small, strategic modifications to test inputs that lead to misclassification. In this work, we study general non-parametric methods, with a view towards understanding when they are robust to these modifications. We establish general conditions under which non-parametric methods are r-consistent, in the sense that they converge to optimally robust and accurate classifiers in the large sample limit. Concretely, our results show that when data is well-separated, nearest-neighbor and kernel classifiers are r-consistent, while histograms are not. For general data distributions, we prove that preprocessing by Adversarial Pruning (Yang et al., 2019), which makes the data well-separated, followed by a nearest-neighbor or kernel classifier, also leads to r-consistency.
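For reference, the abstract's central notion can be written out. The following is a minimal sketch of the standard definitions from the robustness literature, stated over a metric ball B(x, r); the notation here is illustrative rather than quoted from the paper:

```latex
% Astuteness of a classifier f at radius r under data distribution D:
% the probability that f is correct on an entire r-ball around a test
% point, not merely at the point itself.
\[
  \operatorname{ast}_r(f) \;=\; \Pr_{(x,y)\sim \mathcal{D}}
    \bigl[\, f(x') = y \ \text{ for all } x' \in B(x, r) \,\bigr]
\]
% A learning method is r-consistent if the astuteness of its output
% converges to the best achievable value, sup_g ast_r(g), in the
% large sample limit.
```

The pipeline in the abstract's last sentence can also be made concrete. Below is an illustrative sketch, not the paper's algorithm: Yang et al. (2019) compute an optimal removal set for Adversarial Pruning, whereas this version greedily deletes the point involved in the most conflicts (pairs of oppositely labeled points within distance 2r) until the surviving classes are more than 2r apart, then runs a plain 1-nearest-neighbor classifier. The function names are hypothetical.

```python
import numpy as np

def adversarial_prune(X, y, r):
    # Greedy stand-in for Adversarial Pruning (Yang et al., 2019):
    # a "conflict" is a pair of oppositely labeled points within 2r.
    # Repeatedly drop the point with the most conflicts until none
    # remain, leaving the surviving classes 2r-separated.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    conflict = (d <= 2 * r) & (y[:, None] != y[None, :])
    keep = np.ones(len(X), dtype=bool)
    while True:
        counts = (conflict & keep[:, None] & keep[None, :]).sum(axis=1)
        i = int(counts.argmax())
        if counts[i] == 0:  # no conflicts left among kept points
            break
        keep[i] = False
    return X[keep], y[keep]

def nn_predict(X_train, y_train, X_test):
    # 1-nearest-neighbor prediction on the pruned training set.
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
    return y_train[d.argmin(axis=1)]

# Toy usage: two overlapping Gaussian blobs in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Xp, yp = adversarial_prune(X, y, r=0.3)
print(f"{len(Xp)} of {len(X)} points survive pruning")
print(nn_predict(Xp, yp, np.array([[-2.0, 0.0], [2.0, 0.0]])))
```

On data that is already well-separated the pruning step removes nothing, matching the abstract's claim that nearest neighbors are r-consistent there on their own; for overlapping distributions, pruning is what restores the separation the consistency result relies on.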
