Paper Title

Adversarial Knowledge Transfer from Unlabeled Data

Paper Authors

Akash Gupta, Rameswar Panda, Sujoy Paul, Jianming Zhang, Amit K. Roy-Chowdhury

Paper Abstract

While machine learning approaches to visual recognition offer great promise, most of the existing methods rely heavily on the availability of large quantities of labeled training data. However, in the vast majority of real-world settings, manually collecting such large labeled datasets is infeasible due to the cost of labeling data or the paucity of data in a given domain. In this paper, we present a novel Adversarial Knowledge Transfer (AKT) framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier on a given visual recognition task. The proposed adversarial learning framework aligns the feature space of the unlabeled source data with that of the labeled target data, so that the target classifier can be used to predict pseudo-labels on the source data. An important novel aspect of our method is that the unlabeled source data can belong to different classes from the labeled target data, and, unlike some existing approaches, there is no need to define a separate pretext task. Extensive experiments demonstrate that models learned using our approach hold considerable promise across a variety of visual recognition tasks on multiple standard datasets.
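
The abstract describes two coupled ideas: adversarially aligning the features of unlabeled source data with those of the labeled target data, and then using the target classifier to assign pseudo-labels to the aligned source data. The following is a minimal PyTorch-style sketch of that general recipe, not the authors' implementation; the network sizes, module names (feature_extractor, classifier, discriminator), confidence threshold, and loss weighting are all illustrative assumptions.

```python
# Minimal sketch of adversarial feature alignment plus pseudo-labeling,
# following the high-level description in the abstract. All shapes and
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_classes = 128, 10

# Feature extractor shared by labeled target data and unlabeled source data.
feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, feat_dim))
# Classifier trained on the labeled target task.
classifier = nn.Linear(feat_dim, num_classes)
# Discriminator tries to tell target features from source features.
discriminator = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_main = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def train_step(x_target, y_target, x_source):
    # 1) Supervised loss on the labeled target batch.
    f_t = feature_extractor(x_target)
    cls_loss = F.cross_entropy(classifier(f_t), y_target)

    # 2) Discriminator update: target features labeled 1, source features labeled 0.
    f_s = feature_extractor(x_source)
    d_t, d_s = discriminator(f_t.detach()), discriminator(f_s.detach())
    disc_loss = (F.binary_cross_entropy_with_logits(d_t, torch.ones_like(d_t))
                 + F.binary_cross_entropy_with_logits(d_s, torch.zeros_like(d_s)))
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()

    # 3) Adversarial alignment: push source features toward the target feature
    #    space by fooling the discriminator.
    d_s_adv = discriminator(feature_extractor(x_source))
    align_loss = F.binary_cross_entropy_with_logits(d_s_adv, torch.ones_like(d_s_adv))

    # 4) Pseudo-label the aligned source data with the target classifier and
    #    train on the confident predictions (threshold 0.9 is an assumption).
    with torch.no_grad():
        probs = F.softmax(classifier(feature_extractor(x_source)), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > 0.9
    pseudo_loss = (F.cross_entropy(classifier(feature_extractor(x_source[keep])), pseudo[keep])
                   if keep.any() else torch.tensor(0.0))

    total = cls_loss + align_loss + pseudo_loss
    opt_main.zero_grad(); total.backward(); opt_main.step()
    return total.item()

# Toy usage with random tensors standing in for real image features.
x_t, y_t = torch.randn(32, 784), torch.randint(0, num_classes, (32,))
x_s = torch.randn(64, 784)
print(train_step(x_t, y_t, x_s))
```

Note that, as in the abstract, the source batch is pseudo-labeled with target classes even though its true classes may differ; how the paper weights or schedules these losses is not specified here.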
