Paper Title

Out-of-distribution Detection by Cross-class Vicinity Distribution of In-distribution Data

Authors

Zhilin Zhao, Longbing Cao, Kun-Yu Lin

Abstract

Deep neural networks for image classification only learn to map in-distribution inputs to their corresponding ground-truth labels in training, without differentiating out-of-distribution samples from in-distribution ones. This results from the assumption that all samples are independent and identically distributed (IID) without distributional distinction. Therefore, a pretrained network learned from in-distribution samples treats out-of-distribution samples as in-distribution and makes high-confidence predictions on them in the test phase. To address this issue, we draw out-of-distribution samples from the vicinity distribution of the training in-distribution samples to learn to reject predictions on out-of-distribution inputs. A \textit{Cross-class Vicinity Distribution} is introduced by assuming that an out-of-distribution sample generated by mixing multiple in-distribution samples does not share the same classes as its constituents. We thus improve the discriminability of a pretrained network by fine-tuning it with out-of-distribution samples drawn from the cross-class vicinity distribution, where each out-of-distribution input corresponds to a complementary label. Experiments on various in-/out-of-distribution datasets show that the proposed method significantly outperforms existing methods in improving the capacity to discriminate between in- and out-of-distribution samples.
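The abstract describes the key construction: surrogate out-of-distribution samples are obtained by mixing in-distribution samples, and each mixed input is trained against a complementary label that excludes the classes of its constituents. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the pairwise mixing, the Beta(alpha, alpha) mixing coefficient, the uniform complementary target, and the equal loss weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def sample_cross_class_vicinity(x, y, num_classes, alpha=1.0):
    """Mix random pairs of in-distribution samples to obtain surrogate
    out-of-distribution inputs, each paired with a soft complementary
    target that excludes the classes of both constituents (assumed scheme)."""
    perm = torch.randperm(x.size(0), device=x.device)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x + (1.0 - lam) * x[perm]

    # Uniform mass over all classes except those of the two constituents,
    # encoding the assumption that the mixed sample belongs to neither.
    idx = torch.arange(x.size(0), device=x.device)
    target = torch.ones(x.size(0), num_classes, device=x.device)
    target[idx, y] = 0.0
    target[idx, y[perm]] = 0.0
    target = target / target.sum(dim=1, keepdim=True)
    return x_mix, target


def finetune_step(model, optimizer, x_in, y_in, num_classes):
    """One fine-tuning step: cross-entropy on in-distribution inputs plus a
    rejection term on the mixed surrogates (equal weighting is an assumption)."""
    model.train()
    x_out, soft_target = sample_cross_class_vicinity(x_in, y_in, num_classes)

    loss_in = F.cross_entropy(model(x_in), y_in)
    log_probs_out = F.log_softmax(model(x_out), dim=1)
    loss_out = -(soft_target * log_probs_out).sum(dim=1).mean()

    loss = loss_in + loss_out
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In use, one would run `finetune_step` over batches of the in-distribution training set starting from the pretrained classifier, then score test inputs with a confidence measure such as the maximum softmax probability to separate in- from out-of-distribution samples; the exact scoring rule here is an assumption, not taken from the paper.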
