Paper Title
Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets
Paper Authors
Paper Abstract
To increase the trustworthiness of deep neural network (DNN) classifiers, an accurate prediction confidence that represents the true likelihood of correctness is crucial. Towards this end, many post-hoc calibration methods have been proposed, which leverage a lightweight model to map the target DNN's output layer into a calibrated confidence. In practice, however, on an out-of-distribution (OOD) dataset the target DNN often misclassifies samples with high confidence, making it significantly harder for existing calibration methods to produce accurate confidences. In this paper, we propose a new post-hoc confidence calibration method, called CCAC (Confidence Calibration with an Auxiliary Class), for DNN classifiers on OOD datasets. The key novelty of CCAC is an auxiliary class in the calibration model which separates misclassified samples from correctly classified ones, thus effectively mitigating the target DNN's overconfidence on wrong predictions. We also propose a simplified version of CCAC to reduce the number of free parameters and facilitate transfer to a new, unseen dataset. Our experiments on different DNN models, datasets, and applications show that CCAC consistently outperforms prior post-hoc calibration methods.
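To make the auxiliary-class idea concrete, the following is a minimal, illustrative sketch (not the paper's actual parameterization, which the abstract does not specify). It assumes a calibration model that temperature-scales the target DNN's K logits and appends one extra logit for an auxiliary "misclassified" class; the calibrated confidence of the predicted class is then its softmax probability over all K+1 classes, so probability mass absorbed by the auxiliary class lowers confidence on likely errors. The function name, the scalar `temperature`, and the constant `aux_bias` are all hypothetical placeholders for learned calibration parameters.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ccac_style_confidence(logits, temperature, aux_bias):
    """Sketch of post-hoc calibration with an auxiliary class.

    logits: (N, K) output of the target DNN.
    temperature: scalar scaling of the K class logits (assumed form).
    aux_bias: logit of the auxiliary "misclassified" class (assumed form).
    Returns the unchanged predictions and calibrated confidences.
    """
    scaled = logits / temperature                           # (N, K)
    aux = np.full((logits.shape[0], 1), aux_bias)           # (N, 1) auxiliary logit
    probs = softmax(np.concatenate([scaled, aux], axis=1))  # (N, K+1)
    pred = logits.argmax(axis=1)                            # prediction is not altered
    conf = probs[np.arange(len(pred)), pred]                # mass taken by the auxiliary
    return pred, conf                                       # class lowers this confidence
```

A larger auxiliary logit (e.g., on inputs the calibration model flags as likely misclassified) directly reduces the reported confidence without changing the predicted label, which is the behavior the abstract attributes to the auxiliary class.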