Paper Title
READ: Aggregating Reconstruction Error into Out-of-distribution Detection
Paper Authors
Paper Abstract
Detecting out-of-distribution (OOD) samples is crucial to the safe deployment of a classifier in the real world. However, deep neural networks are known to be overconfident on abnormal data. Existing works directly design score functions by mining the inconsistency of the classifier's behavior between in-distribution (ID) and OOD data. In this paper, we further complement this inconsistency with reconstruction error, based on the assumption that an autoencoder trained on ID data cannot reconstruct OOD data as well as ID data. We propose a novel method, READ (Reconstruction Error Aggregated Detector), to unify the inconsistencies from the classifier and the autoencoder. Specifically, the reconstruction error of raw pixels is transformed into the latent space of the classifier. We show that the transformed reconstruction error bridges the semantic gap and inherits the detection performance of the original. Moreover, we propose an adjustment strategy to alleviate the overconfidence problem of the autoencoder according to a fine-grained characterization of OOD data. Under the two scenarios of pre-training and retraining, we present two variants of our method: READ-MD (Mahalanobis Distance), which is based only on a pre-trained classifier, and READ-ED (Euclidean Distance), which retrains the classifier. Our methods do not require access to test-time OOD data for fine-tuning hyperparameters. Finally, we demonstrate the effectiveness of the proposed methods through extensive comparisons with state-of-the-art OOD detection algorithms. On a CIFAR-10 pre-trained WideResNet, our method reduces the average FPR@95TPR by up to 9.8% compared with the previous state-of-the-art.
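To make the scoring idea concrete, here is a minimal sketch of how the abstract's two inconsistencies could be aggregated: the pixel-space reconstruction error of an ID-trained autoencoder is transformed into the classifier's latent space and combined with a classifier-side score. The function and parameter names (`feature_extractor`, `head`, `alpha`), the use of max softmax probability as the classifier score, and the sign convention are illustrative assumptions, not the paper's exact formulation.

```python
# A hedged sketch of a READ-ED-style score, assuming PyTorch modules for
# the classifier backbone, classifier head, and pixel-space autoencoder.
import torch
import torch.nn as nn


@torch.no_grad()
def read_score(x: torch.Tensor,
               feature_extractor: nn.Module,  # classifier backbone: x -> latent z
               head: nn.Module,               # classifier head: z -> logits
               autoencoder: nn.Module,        # autoencoder trained on ID data
               alpha: float = 0.5) -> torch.Tensor:
    """Per-sample detection score; lower values suggest OOD (convention assumed)."""
    x_hat = autoencoder(x)                # reconstruct raw pixels
    z = feature_extractor(x)              # latent features of the input
    z_hat = feature_extractor(x_hat)      # latent features of the reconstruction
    # Reconstruction error transformed into the classifier's latent space,
    # measured with Euclidean distance as in the READ-ED variant.
    recon_err = torch.linalg.vector_norm(z - z_hat, dim=1)
    # Classifier-side inconsistency score, illustrated with max softmax prob.
    msp = torch.softmax(head(z), dim=1).max(dim=1).values
    # Aggregate both inconsistencies; `alpha` is a hypothetical trade-off weight.
    return msp - alpha * recon_err
```

The READ-MD variant described in the abstract would instead replace the Euclidean distance with a Mahalanobis distance computed from ID feature statistics of a pre-trained classifier, with no retraining.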