Paper Title
Class-Similarity Based Label Smoothing for Confidence Calibration
Paper Authors
Paper Abstract
Generating confidence-calibrated outputs is of utmost importance for applying deep neural networks in safety-critical decision-making systems. The output of a neural network is a probability distribution whose scores are the estimated confidences that the input belongs to the corresponding classes, and hence it represents a complete estimate of the output likelihood relative to all classes. In this paper, we propose a novel form of label smoothing to improve confidence calibration. Since different classes exhibit different degrees of intrinsic similarity, more similar classes should yield closer probability values in the final output. This motivates a new smoothed label whose values are based on each class's similarity to the reference class. We adopt several similarity measures, including ones that capture feature-based and semantic similarity. Through extensive experiments on various datasets and network architectures, we demonstrate that our approach consistently outperforms state-of-the-art calibration techniques, including uniform label smoothing.
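To make the idea concrete, below is a minimal Python sketch of one plausible way to build similarity-based smoothed labels. It assumes a hypothetical precomputed K x K class-similarity matrix sim (e.g., cosine similarity of class-mean features or of word embeddings of the class names) and reuses the usual smoothing strength alpha; the function name similarity_smoothed_label is illustrative, and the paper's exact construction and normalization may differ.

# Sketch: similarity-based label smoothing (not the paper's exact formulation).
# `sim` is a hypothetical precomputed K x K class-similarity matrix and
# `alpha` is the familiar smoothing strength from uniform label smoothing.
import numpy as np

def similarity_smoothed_label(y: int, sim: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Return a smoothed target distribution for true class `y`.

    Instead of spreading the smoothing mass uniformly over all classes,
    it is distributed in proportion to each class's similarity to `y`.
    """
    num_classes = sim.shape[0]
    # Similarities of every class to the reference (true) class, excluding itself.
    weights = sim[y].astype(float)
    weights[y] = 0.0
    weights = np.clip(weights, 0.0, None)   # keep weights non-negative
    if weights.sum() == 0:                   # fall back to uniform smoothing
        weights = np.ones(num_classes)
        weights[y] = 0.0
    weights /= weights.sum()                 # normalize to a distribution

    target = alpha * weights                 # smoothing mass, similarity-weighted
    target[y] += 1.0 - alpha                 # most mass stays on the true class
    return target

# Example: 4 classes where class 0 is much more similar to class 1 than to 2 or 3.
sim = np.array([[1.0, 0.8, 0.1, 0.1],
                [0.8, 1.0, 0.1, 0.1],
                [0.1, 0.1, 1.0, 0.5],
                [0.1, 0.1, 0.5, 1.0]])
print(similarity_smoothed_label(0, sim, alpha=0.1))
# -> [0.9, 0.08, 0.01, 0.01]: more smoothing mass goes to class 1 than to 2 or 3.

Compared with uniform label smoothing, the smoothing mass here concentrates on the classes most similar to the true class, which is the intuition the abstract describes.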