Paper Title

Improving Fine-tuning of Self-supervised Models with Contrastive Initialization

Paper Authors

Haolin Pan, Yong Guo, Qinyi Deng, Haomin Yang, Yiqun Chen, Jian Chen

Paper Abstract

Self-supervised learning (SSL) has achieved remarkable performance in pretraining models that can be further used in downstream tasks via fine-tuning. However, these self-supervised models may not capture meaningful semantic information since the images belonging to the same class are always regarded as negative pairs in the contrastive loss. Consequently, the images of the same class are often located far away from each other in the learned feature space, which would inevitably hamper the fine-tuning process. To address this issue, we seek to provide a better initialization for the self-supervised models by enhancing the semantic information. To this end, we propose a Contrastive Initialization (COIN) method that breaks the standard fine-tuning pipeline by introducing an extra initialization stage before fine-tuning. Extensive experiments show that, with the enriched semantics, our COIN significantly outperforms existing methods without introducing extra training cost and sets a new state of the art on multiple downstream tasks.
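
The abstract describes COIN only at a high level: starting from a self-supervised pretrained encoder, an extra class-aware initialization stage is run before standard fine-tuning, so that images of the same class are pulled together instead of being treated as negative pairs. The sketch below illustrates one plausible form of such a stage; the supervised-contrastive-style loss, the function names, and the hyperparameters are assumptions made for illustration, not the exact COIN objective from the paper.

```python
# Minimal sketch of a class-aware initialization stage inserted between SSL
# pretraining and standard fine-tuning. The loss here is a SupCon-style
# supervised contrastive loss, assumed for illustration only.
import torch
import torch.nn.functional as F


def class_aware_contrastive_loss(features, labels, temperature=0.1):
    """Pull embeddings sharing a label together and push the rest apart."""
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature              # (N, N) similarities
    # Numerical stability: subtract the per-row maximum before exponentiating.
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()

    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    # Positives: samples with the anchor's label, excluding the anchor itself.
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # Softmax denominator over all non-self pairs.
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_count
    # Average only over anchors that actually have a positive in the batch.
    has_pos = pos_mask.any(dim=1)
    return loss[has_pos].mean() if has_pos.any() else sim.sum() * 0.0


def initialization_stage(encoder, loader, optimizer, epochs=10, device="cuda"):
    """Hypothetical extra stage run on the SSL-pretrained encoder before fine-tuning."""
    encoder.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            feats = encoder(images)                           # (N, D) embeddings
            loss = class_aware_contrastive_loss(feats, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return encoder  # afterwards, fine-tune with a classification head as usual
```

After such a stage, the encoder would be fine-tuned in the usual way with a classification head and cross-entropy loss; the class-aware objective only changes how the fine-tuning is initialized.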
