Paper Title


Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing

Authors

Hengtong Hu, Lingxi Xie, Richang Hong, Qi Tian

Abstract


In recent years, cross-modal hashing (CMH) has attracted increasing attention, mainly because of its ability to map contents from different modalities, especially vision and language, into the same space, which makes it efficient for cross-modal data retrieval. There are two main frameworks for CMH, differing from each other in whether semantic supervision is required. Compared to unsupervised methods, supervised methods often enjoy more accurate results, but require much heavier labor in data annotation. In this paper, we propose a novel approach that enables guiding a supervised method using outputs produced by an unsupervised method. Specifically, we make use of teacher-student optimization for propagating knowledge. Experiments are performed on two popular CMH benchmarks, i.e., the MIRFlickr and NUS-WIDE datasets. Our approach outperforms all existing unsupervised methods by a large margin.
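The core idea of the abstract — using an unsupervised model's output as pseudo-supervision for a student hash model — can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the teacher here is a simple random-projection (LSH-style) hash function and the student is a per-modality linear map fit by least squares, standing in for the paper's actual unsupervised CMH teacher and supervised student networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired features from two modalities (e.g., image and text descriptors).
# Shapes and names are illustrative, not the paper's actual setup.
n, d_img, d_txt, n_bits = 200, 64, 32, 16
img = rng.normal(size=(n, d_img))
txt = rng.normal(size=(n, d_txt))

# "Teacher": an unsupervised hash function (here a random projection on the
# image modality), whose binary codes serve as pseudo-labels.
W_teacher = rng.normal(size=(d_img, n_bits))
pseudo_codes = np.sign(img @ W_teacher)  # entries in {-1, +1}

# "Student": one linear hash function per modality, fit by least squares to
# reproduce the teacher's codes -- the knowledge-distillation step.
W_img, *_ = np.linalg.lstsq(img, pseudo_codes, rcond=None)
W_txt, *_ = np.linalg.lstsq(txt, pseudo_codes, rcond=None)

student_img_codes = np.sign(img @ W_img)
student_txt_codes = np.sign(txt @ W_txt)

# Fraction of bits on which the student's image branch agrees with the teacher.
agreement = (student_img_codes == pseudo_codes).mean()
print(f"student/teacher bit agreement: {agreement:.2f}")
```

Because both modality branches are trained against the same pseudo codes, their outputs land in a shared Hamming space, which is what makes cross-modal retrieval by code comparison possible.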
