Paper Title
Combining Self-Supervised and Supervised Learning with Noisy Labels
Paper Authors
Paper Abstract
Since convolutional neural networks (CNNs) can easily overfit noisy labels, which are ubiquitous in visual classification tasks, training CNNs robustly against them has been a great challenge. Various methods have been proposed to address this challenge. However, none of them pay attention to the difference between the representation and classifier learning of CNNs. Thus, inspired by the observation that the classifier is more robust to noisy labels while the representation is much more fragile, and by recent advances in self-supervised representation learning (SSRL) technologies, we design a new method, i.e., CS$^3$NL, which obtains the representation by SSRL without labels and trains the classifier directly with noisy labels. Extensive experiments are performed on both synthetic and real benchmark datasets. Results demonstrate that the proposed method can beat the state-of-the-art ones by a large margin, especially under high noise levels.
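The abstract describes a two-stage recipe: learn the representation without any labels, then train only the classifier on the noisy labels. Below is a minimal illustrative sketch of that split, not the actual CS$^3$NL implementation: a fixed random feature map stands in for the self-supervised CNN encoder, and a logistic-regression classifier is trained on deliberately corrupted labels. All data, dimensions, and the noise rate are toy assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes in 2-D.
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

# Inject symmetric label noise: flip each label with probability 0.4.
noisy = y.copy()
flip = rng.random(2 * n) < 0.4
noisy[flip] = 1 - noisy[flip]

# Stage 1 (stand-in for SSRL): a fixed random feature map built
# WITHOUT using any labels. The real method would use a CNN encoder
# trained with a self-supervised objective instead.
W = rng.normal(size=(2, 16))
Z = np.tanh(X @ W)  # frozen "representation"

# Stage 2: train a linear classifier on the frozen representation
# using only the NOISY labels (plain logistic regression via GD).
w, b, lr = np.zeros(16), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    g = p - noisy
    w -= lr * Z.T @ g / len(Z)
    b -= lr * g.mean()

# Evaluate against the CLEAN labels: with symmetric noise, the learned
# decision boundary still tracks the true class structure.
pred = (1.0 / (1.0 + np.exp(-(Z @ w + b))) > 0.5).astype(int)
acc = (pred == y).mean()
print(f"clean-label accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: the representation never sees a (possibly corrupted) label, so only the comparatively robust linear classifier is exposed to the noise.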