Paper Title

Unsupervised Transfer Learning with Self-Supervised Remedy

Paper Authors

Jiabo Huang, Shaogang Gong

Paper Abstract

Generalising deep networks to novel domains without manual labels is challenging for deep learning. This problem is intrinsically difficult due to the unpredictable and changing nature of imagery data distributions in novel domains. Pre-learned knowledge does not transfer well without making strong assumptions about the learned and the novel domains. Different methods have been studied to address the underlying problem based on different assumptions, e.g. from domain adaptation to zero-shot and few-shot learning. In this work, we address this problem by transfer clustering, which aims to learn a discriminative latent space for the unlabelled target data in a novel domain by knowledge transfer from labelled related domains. Specifically, we want to leverage relative (pairwise) imagery information, which is freely available and intrinsic to a target domain, to model the target domain image distribution characteristics, together with the prior knowledge learned from related labelled domains, to enable more discriminative clustering of unlabelled target data. Our method mitigates non-transferable prior knowledge by self-supervision, benefiting from both transfer and self-supervised learning. Extensive experiments on four datasets for image clustering tasks reveal the superiority of our model over the state-of-the-art transfer clustering techniques. We further demonstrate its competitive transferability on four zero-shot learning benchmarks.
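
To make the pairwise idea concrete, below is a minimal, self-contained sketch (not the authors' implementation) of transfer clustering driven by pairwise pseudo-labels: an encoder standing in for a source-pretrained network produces target-domain features, confidently similar/dissimilar pairs provide a self-supervised signal, and a clustering head is trained so that its soft cluster assignments agree on positive pairs and disagree on negative ones. The network sizes, similarity thresholds, and toy data are all illustrative assumptions.

```python
# Illustrative sketch of pairwise-pseudo-label transfer clustering (assumed
# setup, not the paper's exact objective or architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

num_clusters = 5
feat_dim = 64

# Stand-in for a backbone pre-trained on the labelled related (source) domain.
encoder = nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
cluster_head = nn.Linear(feat_dim, num_clusters)

optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(cluster_head.parameters()), lr=1e-3
)

# Toy unlabelled "target domain" batch; in practice, image features from a CNN.
target_batch = torch.randn(256, 128)

for step in range(100):
    feats = F.normalize(encoder(target_batch), dim=1)

    # Pairwise pseudo-labels from feature similarity -- a self-supervised signal
    # intrinsic to the target domain: similar pairs -> 1, dissimilar pairs -> 0,
    # ambiguous pairs are masked out.
    with torch.no_grad():
        sim = feats @ feats.t()
        pseudo_pos = (sim > 0.9).float()
        pseudo_neg = (sim < 0.1).float()
        mask = pseudo_pos + pseudo_neg  # only confident pairs contribute

    # Soft cluster assignments; the dot product of two assignment vectors is
    # close to 1 when both samples fall into the same cluster.
    probs = F.softmax(cluster_head(feats), dim=1)
    agreement = probs @ probs.t()

    # Binary cross-entropy over confident pairs: pull positives into the same
    # cluster, push negatives into different clusters.
    loss = F.binary_cross_entropy(
        agreement.clamp(1e-6, 1 - 1e-6), pseudo_pos, weight=mask
    )

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print("final pairwise clustering loss:", loss.item())
```

In this sketch the pseudo-labels themselves come from the (transferred) features, so unreliable prior knowledge only influences training through pairs it is confident about; the masking of ambiguous pairs is one simple way to limit the effect of non-transferable knowledge.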
