Paper Title

SELC: Self-Ensemble Label Correction Improves Learning with Noisy Labels

Paper Authors

Yangdi Lu, Wenbo He

Abstract

Deep neural networks are prone to overfitting noisy labels, resulting in poor generalization performance. To overcome this problem, we present a simple and effective method self-ensemble label correction (SELC) to progressively correct noisy labels and refine the model. We look deeper into the memorization behavior in training with noisy labels and observe that the network outputs are reliable in the early stage. To retain this reliable knowledge, SELC uses ensemble predictions formed by an exponential moving average of network outputs to update the original noisy labels. We show that training with SELC refines the model by gradually reducing supervision from noisy labels and increasing supervision from ensemble predictions. Despite its simplicity, compared with many state-of-the-art methods, SELC obtains more promising and stable results in the presence of class-conditional, instance-dependent, and real-world label noise. The code is available at https://github.com/MacLLL/SELC.
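The core update described in the abstract, forming soft targets as an exponential moving average of network outputs so that supervision gradually shifts from the noisy labels to the ensemble predictions, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the momentum value `alpha` and all variable names are assumptions, and the real method applies the update per epoch during training rather than against fixed model outputs.

```python
import numpy as np

def selc_update(targets, probs, alpha=0.9):
    """One SELC-style label-correction step (sketch).

    targets: current soft targets, shape (N, C); initialized as
             one-hot encodings of the (possibly noisy) labels.
    probs:   model softmax outputs, shape (N, C).
    alpha:   EMA momentum (illustrative value, not from the paper).

    Returns updated soft targets: an exponential moving average that
    reduces the weight on the original labels by a factor of alpha
    each step and moves the rest toward the model's predictions.
    """
    return alpha * targets + (1.0 - alpha) * probs

# Toy example: 3 samples, 2 classes; the label of sample 2 is noisy
# (given as class 0, but the model confidently predicts class 1).
noisy_onehot = np.eye(2)[[0, 1, 0]]
model_probs = np.array([[0.9, 0.1],
                        [0.2, 0.8],
                        [0.1, 0.9]])

targets = noisy_onehot
for _ in range(20):  # repeated correction steps (one per epoch in training)
    targets = selc_update(targets, model_probs, alpha=0.8)

# After many steps the noisy one-hot label of sample 2 has been
# corrected toward the model's ensemble prediction for class 1.
print(targets.round(2))
```

Because the original label's weight decays as `alpha**k` after `k` steps, early training (where the abstract observes network outputs are reliable) still sees mostly the given labels, while later training is supervised mainly by the ensemble predictions.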
