Paper Title

Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain Adaptation

Authors

Li Zhong, Zhen Fang, Feng Liu, Bo Yuan, Guangquan Zhang, Jie Lu

Abstract

In unsupervised open set domain adaptation (UOSDA), the target domain contains unknown classes that are not observed in the source domain. Researchers in this area aim to train a classifier that can accurately: 1) recognize unknown target data (data with unknown classes), and 2) classify the other target data. To achieve this aim, a previous study proved an upper bound on the target-domain risk, in which the open set difference, an important term in the bound, measures the risk on unknown target data. By minimizing the upper bound, a shallow classifier can be trained to achieve the aim. However, if the classifier is very flexible (e.g., deep neural networks (DNNs)), the open set difference converges to a negative value when the upper bound is minimized, which causes most target data to be recognized as unknown. To address this issue, we propose a new upper bound on the target-domain risk for UOSDA, which consists of four terms: the source-domain risk, the $ε$-open set difference ($Δ_ε$), a distributional discrepancy between domains, and a constant. Compared to the open set difference, $Δ_ε$ is more robust against this issue when minimized, so we are able to use very flexible classifiers (i.e., DNNs). We then propose a new principle-guided deep UOSDA method that trains DNNs by minimizing the new upper bound. Specifically, the source-domain risk and $Δ_ε$ are minimized by gradient descent, and the distributional discrepancy is minimized via a novel open-set conditional adversarial training strategy. Finally, compared to existing shallow and deep UOSDA methods, our method achieves state-of-the-art performance on several benchmark datasets, including digit recognition (MNIST, SVHN, USPS), object recognition (Office-31, Office-Home), and face recognition (PIE).
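The abstract does not give the exact definition of $Δ_ε$, only that, unlike the plain open set difference, it cannot be driven to arbitrarily negative values by a very flexible classifier. As a rough illustration of that clipping idea only, the following sketch assumes the open set difference behaves like a mean "unknown"-probability gap between target and source, and that $Δ_ε$ bounds it from below at $-ε$; all function names and the clipping form are assumptions, not the paper's actual formulas.

```python
import numpy as np

def open_set_difference(p_unk_target, p_unk_source):
    """Illustrative open set difference: mean predicted 'unknown'
    probability on target data minus the same quantity on source data.
    (Hypothetical form; the paper's definition is not given in the abstract.)"""
    return p_unk_target.mean() - p_unk_source.mean()

def eps_open_set_difference(p_unk_target, p_unk_source, eps=0.1):
    """Hypothetical clipped variant of the open set difference, bounded
    below by -eps so minimization cannot push it arbitrarily negative."""
    return max(open_set_difference(p_unk_target, p_unk_source), -eps)

# A case where the plain difference is very negative: the clipped variant
# stops at -eps, while the plain one could keep decreasing during training.
p_t = np.zeros(5)   # predicted 'unknown' probabilities on 5 target samples
p_s = np.ones(5)    # predicted 'unknown' probabilities on 5 source samples
print(open_set_difference(p_t, p_s))       # -1.0
print(eps_open_set_difference(p_t, p_s))   # -0.1
```

The point of the clipped term is that gradient descent on it stalls once the floor $-ε$ is reached, which the abstract credits with preventing the collapse where most target data are labeled unknown.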
