Paper Title
Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness
Paper Authors
Paper Abstract
We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners. Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction $\beta \ll 1$ of the data distribution. Our proposed notion of barely robust learning requires robustness with respect to a "larger" perturbation set, which we show is necessary for strongly robust learning; weaker relaxations are not sufficient for strongly robust learning. Our results reveal a qualitative and quantitative equivalence between two seemingly unrelated problems: strongly robust learning and barely robust learning.
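The intuition behind boosting a barely robust learner can be illustrated with a toy sketch. This is not the paper's algorithm, only a hypothetical coverage argument: if each call to the weak learner returns a predictor that is robust on a $\beta$ fraction of the remaining distribution, then repeatedly re-training on the still-uncovered points and keeping each predictor for its robust region covers a $1-\epsilon$ fraction of the mass in roughly $O(\tfrac{1}{\beta}\log\tfrac{1}{\epsilon})$ rounds. The names `barely_robust_learner` and `boost_coverage` are illustrative stand-ins, not from the paper.

```python
def barely_robust_learner(points, beta=0.1):
    """Toy stand-in for a barely robust learner: it is 'robust' on only a
    beta fraction of whatever sample it is trained on (here, the first
    beta-fraction of points, for simplicity)."""
    k = max(1, int(beta * len(points)))
    return set(points[:k])  # region of guaranteed robustness

def boost_coverage(points, eps=0.05, beta=0.1):
    """Boosting-style coverage loop: re-train on the uncovered points and
    keep each predictor for the region where it is robust, until at most
    an eps fraction of the original points remains uncovered."""
    uncovered = list(points)
    robust_regions = []
    while len(uncovered) > eps * len(points):
        region = barely_robust_learner(uncovered, beta)
        robust_regions.append(region)
        uncovered = [p for p in uncovered if p not in region]
    return robust_regions, uncovered

# Each round removes a beta fraction of the remainder, so the uncovered
# mass shrinks geometrically: (1 - beta)^T <= eps after T ~ (1/beta) ln(1/eps).
regions, uncovered = boost_coverage(list(range(1000)))
print(len(uncovered) <= 0.05 * 1000)
```

The point of the sketch is the geometric decay of the uncovered mass; the substantive difficulty addressed in the paper, which this toy elides, is that barely robust learning must hold with respect to a larger perturbation set for such boosting to yield strong robustness.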