Paper Title

Towards an Intrinsic Definition of Robustness for a Classifier

Authors

Giraudon, Théo, Gripon, Vincent, Löwe, Matthias, Vermet, Franck

Abstract

The robustness of classifiers has become a question of paramount importance in the past few years. Indeed, it has been shown that state-of-the-art deep learning architectures can easily be fooled with imperceptible changes to their inputs. Therefore, finding good measures of robustness of a trained classifier is a key issue in the field. In this paper, we point out that averaging the radius of robustness of samples in a validation set is a statistically weak measure. We propose instead to weight the importance of samples depending on their difficulty. We motivate the proposed score by a theoretical case study using logistic regression, where we show that the proposed score is independent of the choice of the samples it is evaluated upon. We also empirically demonstrate the ability of the proposed score to measure robustness of classifiers with little dependence on the choice of samples in more complex settings, including deep convolutional neural networks and real datasets.
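The abstract's core idea can be sketched numerically: a plain average of per-sample robustness radii versus a difficulty-weighted average. This is a minimal illustration only; the paper's actual scoring function and its difficulty weights are not given in this abstract, so the `difficulty` values below are hypothetical placeholders.

```python
def mean_robustness(radii):
    """Plain average of per-sample robustness radii (the statistically weak baseline)."""
    return sum(radii) / len(radii)

def weighted_robustness(radii, difficulty):
    """Difficulty-weighted average: each sample's radius is scaled by a weight
    reflecting how hard that sample is, then normalized by the total weight.
    The weighting scheme here is a generic sketch, not the paper's actual score."""
    total = sum(difficulty)
    return sum(r * w for r, w in zip(radii, difficulty)) / total

# Hypothetical validation-set values for illustration.
radii = [0.8, 0.2, 0.5]        # robustness radius of each sample
difficulty = [1.0, 3.0, 2.0]   # assumed per-sample difficulty weights

print(mean_robustness(radii))              # unweighted baseline
print(weighted_robustness(radii, difficulty))
```

With these placeholder numbers, the weighted score down-weights the easy, large-radius sample, so the two measures differ; the paper's claim is that a properly weighted score is far less sensitive to which validation samples happen to be chosen.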
