Paper Title

Dual Representation Learning for Out-of-Distribution Detection

Authors

Zhilin Zhao, Longbing Cao

Abstract


To classify in-distribution samples, deep neural networks explore strongly label-related information and discard weakly label-related information according to the information bottleneck. Out-of-distribution samples drawn from distributions differing from that of in-distribution samples could be assigned unexpectedly high-confidence predictions because they could obtain minimum strongly label-related information. To distinguish in- and out-of-distribution samples, Dual Representation Learning (DRL) makes it harder for out-of-distribution samples to obtain high-confidence predictions by exploring both strongly and weakly label-related information from in-distribution samples. For a pretrained network that explores strongly label-related information to learn label-discriminative representations, DRL trains an auxiliary network that explores the remaining weakly label-related information to learn distribution-discriminative representations. Specifically, for a label-discriminative representation, DRL constructs its complementary distribution-discriminative representation by integrating diverse representations less similar to the label-discriminative representation. Accordingly, DRL combines label- and distribution-discriminative representations to detect out-of-distribution samples. Experiments show that DRL outperforms the state-of-the-art methods for out-of-distribution detection.
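The abstract's core idea can be illustrated schematically: build a distribution-discriminative representation by down-weighting candidate representations that are similar to the label-discriminative one, then score samples by combining both views. The following is a minimal numpy sketch under assumed conventions; the function names, weighting scheme, and scoring rule are hypothetical illustrations, not the paper's exact formulation.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two representation vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def distribution_representation(z_label, candidates):
    # Hypothetical scheme: weight candidate representations inversely to
    # their similarity with the label-discriminative representation, so the
    # result emphasizes weakly label-related information.
    sims = np.array([cosine_sim(z_label, c) for c in candidates])
    weights = np.exp(-sims)
    weights /= weights.sum()
    return (weights[:, None] * candidates).sum(axis=0)

def ood_score(label_logits, z_dist, id_prototype):
    # Combine label confidence (max softmax probability) with the distance
    # of the distribution-discriminative representation from an assumed
    # in-distribution prototype; higher score suggests out-of-distribution.
    p = np.exp(label_logits - label_logits.max())
    p /= p.sum()
    confidence = p.max()
    dist = np.linalg.norm(z_dist - id_prototype)
    return dist - confidence
```

A sample whose distribution-discriminative representation lies far from the in-distribution prototype receives a higher score even when the label head is confident, which captures why combining the two views can flag confidently misclassified out-of-distribution inputs.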
