Paper Title

Incremental Unsupervised Domain-Adversarial Training of Neural Networks

Authors

Antonio-Javier Gallego, Jorge Calvo-Zaragoza, Robert B. Fisher

Abstract


In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn. When this is not the case, the behavior of the learned model is unpredictable and becomes dependent upon the degree of similarity between the distribution of the training set and the distribution of the test set. One of the research topics that investigates this scenario is referred to as domain adaptation. Deep neural networks have brought dramatic advances in pattern recognition, which is why there have been many attempts to provide good domain adaptation algorithms for these models. Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively. We make use of an existing unsupervised domain-adaptation algorithm to identify the target samples on which there is greater confidence about their true label. The output of the model is analyzed in different ways to determine the candidate samples. The selected set is then added to the source training set by considering the labels provided by the network as ground truth, and the process is repeated until all target samples are labelled. Our results report a clear improvement with respect to the non-incremental case in several datasets, also outperforming other state-of-the-art domain adaptation algorithms.
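The incremental loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the confidence threshold, and the scikit-learn-style `fit`/`predict_proba` interface are all assumptions; in the paper the base model is a domain-adversarial neural network, but any probabilistic classifier exposes the same control flow.

```python
import numpy as np

def incremental_self_labelling(model, x_source, y_source, x_target,
                               confidence=0.9, max_rounds=20):
    """Hedged sketch of the incremental pseudo-labeling loop.

    Assumes `model` has `fit(X, y)` and `predict_proba(X)` methods and
    that class labels are the integers 0..k-1 (so argmax indices can be
    used directly as pseudo-labels). `confidence` and `max_rounds` are
    illustrative parameters, not values from the paper.
    """
    x_train, y_train = x_source.copy(), y_source.copy()
    remaining = x_target.copy()
    for _ in range(max_rounds):
        if len(remaining) == 0:
            break
        # Retrain on the current (source + pseudo-labelled) training set.
        model.fit(x_train, y_train)
        probs = model.predict_proba(remaining)
        conf = probs.max(axis=1)
        picked = conf >= confidence
        if not picked.any():
            # Fallback so the loop makes progress: take the single
            # most confident remaining sample.
            picked = np.zeros(len(remaining), dtype=bool)
            picked[conf.argmax()] = True
        pseudo = probs[picked].argmax(axis=1)
        # Treat the network's predictions as ground truth and move the
        # selected target samples into the training set.
        x_train = np.concatenate([x_train, remaining[picked]])
        y_train = np.concatenate([y_train, pseudo])
        remaining = remaining[~picked]
    return model, x_train, y_train
```

The key design point from the abstract is that only high-confidence target samples are promoted each round, so early pseudo-label noise is limited while the model gradually shifts toward the target domain.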
