Paper Title


Generative Pseudo-label Refinement for Unsupervised Domain Adaptation

Paper Authors

Pietro Morerio, Riccardo Volpi, Ruggero Ragonesi, Vittorio Murino

Abstract


We investigate and characterize the inherent resilience of conditional Generative Adversarial Networks (cGANs) to noise in their conditioning labels, and exploit this fact in the context of Unsupervised Domain Adaptation (UDA). In UDA, a classifier trained on the labelled source set can be used to infer pseudo-labels on the unlabelled target set. However, this results in a significant number of misclassified examples (due to the well-known domain-shift issue), which can be interpreted as noise injected into the ground-truth labels of the target set. We show that cGANs are, to some extent, robust against such "shift noise". Indeed, cGANs trained with noisy pseudo-labels are able to filter out this noise and generate cleaner target samples. We exploit this finding in an iterative procedure in which a generative model and a classifier are jointly trained: in turn, the generator allows sampling cleaner data from the target distribution, and the classifier allows associating better labels with target samples, progressively refining the target pseudo-labels. Results on common benchmarks show that our method performs comparably with, or better than, the state of the art in unsupervised domain adaptation.
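The alternation described in the abstract (pseudo-label the target, train a class-conditional generative model on the noisy labels, retrain the classifier on the cleaner generated data, repeat) can be sketched on a toy problem. The sketch below is not the paper's cGAN: it substitutes a nearest-class-mean classifier for the source-trained network and class-conditional averaging for the noise-filtering generator, purely to illustrate how the iterative refinement loop can correct "shift noise"; all data, means, and function names are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: the target distribution is a shifted copy of the source
# (the "domain shift"). Target labels are held out, used only for scoring.
n = 200
src_x = np.concatenate([rng.normal(0.0, 0.5, n), rng.normal(4.0, 0.5, n)])
src_y = np.concatenate([np.zeros(n, int), np.ones(n, int)])
tgt_x = np.concatenate([rng.normal(1.5, 0.5, n), rng.normal(5.5, 0.5, n)])
tgt_y = np.concatenate([np.zeros(n, int), np.ones(n, int)])

def fit_means(x, y):
    """Stand-in 'classifier': one prototype per class (nearest-mean rule)."""
    return np.array([x[y == c].mean() for c in (0, 1)])

def predict(means, x):
    return np.abs(x[:, None] - means[None, :]).argmin(axis=1)

means = fit_means(src_x, src_y)                    # trained on labelled source
acc_before = (predict(means, tgt_x) == tgt_y).mean()

for _ in range(5):
    pseudo = predict(means, tgt_x)                 # noisy target pseudo-labels
    # Stand-in 'generator': class-conditional means play the role of a
    # mode-seeking cGAN that averages out label noise, yielding cleaner
    # class prototypes from which the classifier is retrained.
    means = fit_means(tgt_x, pseudo)

acc_after = (predict(means, tgt_x) == tgt_y).mean()
print(f"target accuracy before: {acc_before:.3f}  after: {acc_after:.3f}")
```

The loop improves target accuracy because each refinement step re-estimates the class prototypes on the target domain itself, moving the decision boundary away from the source-optimal one; the paper's contribution is showing that a cGAN provides an analogous noise-filtering effect for high-dimensional image data.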
