Paper Title


An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods

Authors

Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo

Abstract


Despite apparent human-level performances of deep neural networks (DNNs), they behave fundamentally differently from humans. They easily change predictions when small corruptions such as blur and noise are applied to the input (lack of robustness), and they often produce confident predictions on out-of-distribution samples (improper uncertainty measure). While a number of studies have aimed to address these issues, the proposed solutions are typically expensive and complicated (e.g., Bayesian inference and adversarial training). Meanwhile, many simple and cheap regularization methods have been developed to enhance the generalization of classifiers. Such regularization methods have largely been overlooked as baselines for addressing the robustness and uncertainty issues, as they were not specifically designed for that purpose. In this paper, we provide extensive empirical evaluations of the robustness and uncertainty estimates of image classifiers (CIFAR-100 and ImageNet) trained with state-of-the-art regularization methods. Furthermore, experimental results show that certain regularization methods can serve as strong baselines for robustness and uncertainty estimation of DNNs.
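To make the "lack of robustness" notion concrete, a minimal sketch of one way to probe it: measure how often a classifier's predicted class flips when small Gaussian noise is added to the input. This is an illustrative toy, not the paper's evaluation protocol; the `predict` function and data below are hypothetical stand-ins for a trained DNN and a test set.

```python
import numpy as np

def prediction_change_rate(predict, x, noise_std=0.1, seed=0):
    """Fraction of samples whose predicted class changes after Gaussian
    noise is added to the input -- a simple robustness proxy."""
    rng = np.random.default_rng(seed)
    clean = predict(x)                                   # predictions on clean inputs
    noisy = predict(x + rng.normal(0.0, noise_std, size=x.shape))  # corrupted inputs
    return float(np.mean(clean != noisy))

# Toy linear "classifier" standing in for a trained DNN (hypothetical).
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
predict = lambda x: np.argmax(x @ W, axis=1)

# Hypothetical 2-D inputs; the last one sits near the decision boundary.
x = np.array([[0.9, 0.1], [0.2, 0.8], [0.51, 0.49]])
print(prediction_change_rate(predict, x, noise_std=0.3))
```

A lower change rate under a fixed corruption indicates higher robustness; the paper compares such behavior across classifiers trained with different regularization methods rather than across toy models like this one.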
