Paper Title
Domain Generalization via Semi-supervised Meta Learning
Paper Authors
Paper Abstract
The goal of domain generalization is to learn from multiple source domains so as to generalize to unseen target domains under distribution discrepancy. Current state-of-the-art methods in this area are fully supervised, but for many real-world problems obtaining enough labeled samples is hardly possible. In this paper, we propose the first domain generalization method that leverages unlabeled samples, combining the episodic training of meta-learning with semi-supervised learning; we call it DGSML. DGSML employs an entropy-based pseudo-labeling approach to assign labels to unlabeled samples and then utilizes a novel discrepancy loss to ensure that the class centroids computed before and after pseudo-labeling the unlabeled samples are close to each other. To learn a domain-invariant representation, it also utilizes a novel alignment loss to ensure that the distances between pairs of class centroids, computed after adding the unlabeled samples, are preserved across different domains. DGSML is trained with a meta-learning approach that mimics the distribution shift between the input source domains and the unseen target domains. Experimental results on benchmark datasets indicate that DGSML outperforms state-of-the-art domain generalization and semi-supervised learning methods.
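The abstract names three concrete ingredients: entropy-based pseudo-labeling, a discrepancy loss between class centroids computed before and after pseudo-labeling, and an alignment loss on pairwise centroid distances across domains. Below is a minimal PyTorch-style sketch of those pieces. The function names, the entropy threshold, and the use of mean-squared error are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def entropy_pseudo_label(logits, threshold=0.5):
    """Pseudo-label unlabeled samples whose prediction entropy is
    below a threshold (i.e., confident predictions). The threshold
    value here is an illustrative assumption."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    confident = entropy < threshold
    return probs.argmax(dim=1), confident

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class; empty classes stay at zero."""
    centroids = torch.zeros(num_classes, features.size(1),
                            device=features.device)
    for c in range(num_classes):
        idx = labels == c
        if idx.any():
            centroids[c] = features[idx].mean(dim=0)
    return centroids

def discrepancy_loss(centroids_before, centroids_after):
    """Keep centroids computed before and after adding the
    pseudo-labeled samples close to each other."""
    return F.mse_loss(centroids_after, centroids_before)

def alignment_loss(centroids_dom_a, centroids_dom_b):
    """Preserve pairwise centroid distances across two domains
    (a sketch of the domain-invariance objective)."""
    dist_a = torch.cdist(centroids_dom_a, centroids_dom_a)
    dist_b = torch.cdist(centroids_dom_b, centroids_dom_b)
    return F.mse_loss(dist_a, dist_b)
```

In one training episode, the labeled centroids would be computed first, confident unlabeled samples pseudo-labeled, centroids recomputed, and the two losses added to the classification loss; splitting the source domains into meta-train and meta-test sets then mimics the shift to an unseen target domain.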