Paper Title
Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation
Paper Authors
Paper Abstract
Partial domain adaptation (PDA) has attracted increasing attention as it deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain. Most conventional domain adaptation (DA) efforts concentrate on learning domain-invariant features to mitigate the distribution disparity across domains. For PDA, however, it is also crucial to explicitly alleviate the negative influence caused by the irrelevant source domain categories. In this work, we propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) that aligns the relevant categories across the two domains for effective domain adaptation. Specifically, an adaptively-accumulated mechanism is explored to gradually filter out the most confident target samples and their corresponding source categories, promoting positive transfer with more knowledge shared across the two domains. Moreover, a dual distinct classifier architecture, consisting of a prototype classifier and a multilayer perceptron classifier, is built to capture the intrinsic data distribution knowledge across domains from different perspectives. By maximizing the inter-class center-wise discrepancy and minimizing the intra-class sample-wise compactness, the proposed model obtains more domain-invariant and task-specific discriminative representations of the shared-category data. Comprehensive experiments on several partial domain adaptation benchmarks demonstrate the effectiveness of the proposed model compared with state-of-the-art PDA methods.
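Below is a minimal sketch, assuming a PyTorch-style implementation, of two mechanisms summarized in the abstract: selecting the most confident target samples (and the source categories they activate) for adaptively-accumulated transfer, and a prototype classifier trained to enlarge the inter-class center-wise discrepancy while shrinking the intra-class sample-wise compactness. All function names, the confidence threshold, the temperature, and the hinge-style margin are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of confident-target selection and a prototype classifier
# with center-separation / sample-compactness losses; not the official A^2KT code.

import torch
import torch.nn.functional as F


def select_confident_targets(probs: torch.Tensor, threshold: float = 0.9):
    """Keep target samples whose max class probability exceeds `threshold`,
    and record which source categories those samples vote for."""
    confidence, pseudo_labels = probs.max(dim=1)
    keep = confidence >= threshold
    active_classes = torch.unique(pseudo_labels[keep])
    return keep, pseudo_labels, active_classes


def prototype_logits(features: torch.Tensor, centers: torch.Tensor,
                     temperature: float = 0.05):
    """Cosine-similarity prototype classifier: one learnable center per class."""
    f = F.normalize(features, dim=1)
    c = F.normalize(centers, dim=1)
    return f @ c.t() / temperature


def center_discrepancy_and_compactness(features, labels, centers,
                                        margin: float = 1.0):
    """Intra-class compactness plus hinge-style inter-class center separation."""
    # Intra-class: squared distance of each sample to its own class center.
    compact = ((features - centers[labels]) ** 2).sum(dim=1).mean()
    # Inter-class: encourage pairwise center distances to exceed a margin.
    dists = torch.cdist(centers, centers)
    off_diag = dists[~torch.eye(len(centers), dtype=torch.bool)]
    separation = F.relu(margin - off_diag).mean()
    return compact + separation


# Toy usage with random tensors standing in for backbone features.
if __name__ == "__main__":
    num_classes, dim = 10, 64
    centers = torch.randn(num_classes, dim, requires_grad=True)
    target_feats = torch.randn(32, dim)

    probs = F.softmax(prototype_logits(target_feats, centers), dim=1)
    keep, pseudo, active = select_confident_targets(probs, threshold=0.9)
    loss = center_discrepancy_and_compactness(
        target_feats[keep], pseudo[keep], centers
    )
    loss.backward()
```

In this sketch, the set of `active_classes` would shrink the effective source label space toward the categories shared with the target domain, which is the intuition behind filtering out irrelevant source classes in PDA.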