Paper Title
Can Calibration Improve Sample Prioritization?
Paper Authors
Paper Abstract
Calibration can reduce overconfident predictions of deep neural networks, but can calibration also accelerate training? In this paper, we show that it can, when calibration is used to prioritize examples for subset selection during training. We study the effect of popular calibration techniques in selecting better subsets of samples during training (also called sample prioritization) and observe that calibration can improve the quality of subsets, reduce the number of examples per epoch (by at least 70%), and thereby speed up the overall training process. We further study the effect of using calibrated pre-trained models coupled with calibration during training to guide sample prioritization, which again seems to improve the quality of the samples selected.
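As a rough illustration (not the authors' implementation), the sketch below shows one way calibration could guide sample prioritization: fit a temperature on a held-out set (temperature scaling), then keep only the training examples the calibrated model is most uncertain about for the next epoch. The predictive-entropy scoring rule and the 30% keep fraction are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of calibration-guided sample prioritization (assumptions noted above).
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, lr=0.01, steps=200):
    """Temperature scaling: minimize NLL of (logits / T) on a validation set."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

def prioritize(train_logits, temperature, keep_frac=0.3):
    """Return indices of the keep_frac most-uncertain examples under the
    calibrated (temperature-scaled) predictive distribution."""
    probs = F.softmax(train_logits / temperature, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    k = max(1, int(keep_frac * len(entropy)))
    return entropy.topk(k).indices  # train on only this subset in the next epoch
```

In this sketch, repeating the prioritization step each epoch is what reduces the number of examples seen per epoch; how the subset is scored and refreshed in the paper's experiments may differ.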