Paper Title

Understanding the Failure Modes of Out-of-Distribution Generalization

Paper Authors

Vaishnavh Nagarajan, Anders Andreassen, Behnam Neyshabur

Paper Abstract

Empirical studies suggest that machine learning models often rely on features, such as the background, that may be spuriously correlated with the label only during training, resulting in poor accuracy at test time. In this work, we identify the fundamental factors that give rise to this behavior by explaining why models fail this way even in easy-to-learn tasks where one would expect these models to succeed. In particular, through a theoretical study of gradient-descent-trained linear classifiers on some easy-to-learn tasks, we uncover two complementary failure modes. These modes arise from how spurious correlations induce two kinds of skews in the data: one geometric in nature, and the other statistical in nature. Finally, we construct natural modifications of image classification datasets to understand when these failure modes can arise in practice. We also design experiments to isolate the two failure modes when training modern neural networks on these datasets.
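The phenomenon the abstract describes can be reproduced in a toy setting. The sketch below (my own illustration, not the paper's construction; feature design, sample sizes, and the 95% correlation level are assumptions) builds a two-feature dataset in which a "background-like" spurious feature agrees with the label 95% of the time during training but is uninformative at test time, then trains a linear classifier with gradient descent on the logistic loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, p_spurious):
    # Labels y in {-1, +1}.
    y = rng.choice([-1.0, 1.0], size=n)
    # "Core" feature: genuinely predictive of y, but noisy.
    core = y + rng.normal(0.0, 2.0, size=n)
    # "Spurious" feature: agrees with y with probability p_spurious
    # (e.g. a background cue that correlates with the label).
    agree = rng.random(n) < p_spurious
    spurious = np.where(agree, y, -y)
    return np.stack([core, spurious], axis=1), y

# Train: spurious feature almost always agrees with the label.
Xtr, ytr = make_data(2000, p_spurious=0.95)
# Test: the spurious correlation is gone.
Xte, yte = make_data(2000, p_spurious=0.50)

# Linear classifier trained by gradient descent on the logistic loss.
w = np.zeros(2)
for _ in range(500):
    margins = np.clip(ytr * (Xtr @ w), -50, 50)
    # Gradient of mean log(1 + exp(-margin)) w.r.t. w.
    grad = -(ytr[:, None] * Xtr / (1.0 + np.exp(margins))[:, None]).mean(axis=0)
    w -= 0.5 * grad

def acc(X, y, w=w):
    return float(((X @ w) * y > 0).mean())

print("weights (core, spurious):", w)
print("train acc:", acc(Xtr, ytr), "test acc:", acc(Xte, yte))
```

In runs of this sketch, the classifier places substantial weight on the spurious coordinate, so accuracy drops once the training-time correlation disappears at test time — a simple instance of the statistical skew the paper analyzes.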
