Paper Title

Optimal transport meets noisy label robust loss and MixUp regularization for domain adaptation

Paper Authors

Kilian Fatras, Hiroki Naganuma, Ioannis Mitliagkas

Paper Abstract

It is common in computer vision to be confronted with domain shift: images that have the same class but different acquisition conditions. In domain adaptation (DA), one wants to classify unlabeled target images using labeled source images. Unfortunately, deep neural networks trained on a source training set perform poorly on target images that do not belong to the training domain. One strategy to improve this performance is to align the source and target image distributions in an embedded space using optimal transport (OT). However, OT can cause negative transfer, i.e., aligning samples with different labels, which leads to overfitting, especially in the presence of label shift between domains. In this work, we mitigate negative alignment by interpreting it as a noisy label assignment to target images. We then mitigate its effect by appropriate regularization. We propose to couple the MixUp regularization \citep{zhang2018mixup} with a loss that is robust to noisy labels in order to improve domain adaptation performance. We show in an extensive ablation study that the combination of the two techniques is critical to achieving improved performance. Finally, we evaluate our method, called \textsc{mixunbot}, on several benchmarks and real-world DA problems.
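
For reference, the sketch below illustrates the MixUp regularization \citep{zhang2018mixup} mentioned in the abstract, i.e., interpolating pairs of inputs and their labels with a Beta-distributed weight. It is a minimal illustration only, not the paper's \textsc{mixunbot} method; the function name mixup_batch and its interface are assumptions made for this example.

```python
# Minimal MixUp sketch (Zhang et al., 2018): convexly combine random pairs of
# inputs and their one-hot labels with a weight drawn from Beta(alpha, alpha).
# This is an illustrative sketch, not the paper's implementation.
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return MixUp-ed inputs and labels for a batch (x: inputs, y: one-hot labels)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))         # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix

# Example usage: mix a batch of 4 samples with 3 classes.
x = np.random.randn(4, 8).astype(np.float32)
y = np.eye(3, dtype=np.float32)[[0, 1, 2, 0]]
x_mix, y_mix = mixup_batch(x, y)
```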
