Paper Title

Self-Labeling Refinement for Robust Representation Learning with Bootstrap Your Own Latent

Paper Authors

Siddhant Garg, Dhruval Jain

Paper Abstract

In this work, we have worked towards two major goals. Firstly, we have investigated the importance of Batch Normalisation (BN) layers in a non-contrastive representation learning framework called Bootstrap Your Own Latent (BYOL). We conducted several experiments to conclude that BN layers are not necessary for representation learning in BYOL. Moreover, BYOL only learns from the positive pairs of images but ignores other semantically similar images in the same input batch. For the second goal, we have introduced two new loss functions to determine the semantically similar pairs in the same input batch of images and reduce the distance between their representations. These loss functions are Cross-Cosine Similarity Loss (CCSL) and Cross-Sigmoid Similarity Loss (CSSL). Using the proposed loss functions, we are able to surpass the performance of Vanilla BYOL (71.04%) by training the BYOL framework using CCSL loss (76.87%) on the STL10 dataset. BYOL trained using CSSL loss performs comparably with Vanilla BYOL.
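
The abstract only names the two proposed losses; their exact formulations are given in the paper. As a rough, non-authoritative sketch of the idea behind CCSL (attracting other semantically similar images in the same batch, on top of the usual BYOL positive-pair term), the PyTorch snippet below computes a cross cosine-similarity matrix between the online and target embeddings and additionally pulls together off-diagonal pairs above a similarity threshold. The function name, the threshold value, and the weighting of the extra term are assumptions for illustration, not the authors' definition.

```python
# Minimal sketch of a cross cosine-similarity style loss (assumed formulation,
# not the paper's exact CCSL). Embeddings come from BYOL's online predictor and
# target projector; the target branch is stop-gradient as in standard BYOL.
import torch
import torch.nn.functional as F

def cross_cosine_similarity_loss(online_pred: torch.Tensor,
                                 target_proj: torch.Tensor,
                                 sim_threshold: float = 0.8) -> torch.Tensor:
    """online_pred, target_proj: (B, D) embeddings for the same input batch."""
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj, dim=-1)
    sim = p @ z.t()                       # (B, B) cross cosine similarities

    # Vanilla BYOL term: each image is attracted to its own augmented view.
    positive = sim.diag()                 # (B,)
    loss_positive = (2.0 - 2.0 * positive).mean()

    # Assumed extra term: off-diagonal pairs whose similarity exceeds the
    # threshold are treated as semantically similar and also attracted.
    off_diag = sim - torch.diag(positive)
    similar_mask = (off_diag.detach() > sim_threshold).float()
    n_similar = similar_mask.sum(dim=-1).clamp(min=1.0)
    extra = (off_diag * similar_mask).sum(dim=-1) / n_similar

    # Maximise similarity for both the positive pair and the similar pairs.
    return loss_positive - extra.mean()
```

In a BYOL-style training loop this would take the place of the standard positive-pair regression loss and, as in BYOL, be computed symmetrically for the two augmented views.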
