Title
VOS: Learning What You Don't Know by Virtual Outlier Synthesis
Authors
Abstract
Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks. One of the key challenges is that models lack supervision signals from unknown data, and as a result, can produce overconfident predictions on OOD data. Previous approaches rely on real outlier datasets for model regularization, which can be costly and sometimes infeasible to obtain in practice. In this paper, we present VOS, a novel framework for OOD detection by adaptively synthesizing virtual outliers that can meaningfully regularize the model's decision boundary during training. Specifically, VOS samples virtual outliers from the low-likelihood region of the class-conditional distribution estimated in the feature space. Alongside, we introduce a novel unknown-aware training objective, which contrastively shapes the uncertainty space between the ID data and synthesized outlier data. VOS achieves competitive performance on both object detection and image classification models, reducing the FPR95 by up to 9.36% compared to the previous best method on object detectors. Code is available at https://github.com/deeplearning-wisc/vos.
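The core sampling step described in the abstract can be illustrated with a minimal sketch: fit a class-conditional Gaussian (per-class means with a tied covariance) to ID features, draw candidates from it, and keep only the lowest-likelihood candidates as virtual outliers. This is a simplified approximation of the paper's approach, not the reference implementation; the function name, parameters, and the use of Mahalanobis distance as a likelihood proxy are illustrative assumptions.

```python
import numpy as np

def synthesize_virtual_outliers(features, labels, num_candidates=10000, num_outliers=100):
    """Hypothetical sketch: sample virtual outliers from the low-likelihood
    region of class-conditional Gaussians fitted to ID feature vectors."""
    classes = np.unique(labels)
    # Per-class means with a covariance matrix shared (tied) across classes
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-4 * np.eye(features.shape[1])
    cov_inv = np.linalg.inv(cov)

    outliers = []
    for c in classes:
        # Draw candidate samples from the fitted class-conditional Gaussian
        cand = np.random.multivariate_normal(means[c], cov, size=num_candidates)
        # Mahalanobis distance to the class mean: larger distance means
        # lower likelihood under the fitted Gaussian
        diff = cand - means[c]
        maha = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        # Keep the lowest-likelihood candidates as virtual outliers
        idx = np.argsort(maha)[-num_outliers:]
        outliers.append(cand[idx])
    return np.vstack(outliers)
```

In the full method these virtual outliers are then fed into the unknown-aware training objective, which pushes the model toward low confidence on them while preserving confidence on ID data.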