Paper Title


Run Away From your Teacher: Understanding BYOL by a Novel Self-Supervised Approach

Authors

Haizhou Shi, Dongliang Luo, Siliang Tang, Jian Wang, Yueting Zhuang

Abstract


Recently, a newly proposed self-supervised framework Bootstrap Your Own Latent (BYOL) seriously challenges the necessity of negative samples in contrastive learning frameworks. BYOL works like a charm despite the fact that it discards the negative samples completely and there is no measure to prevent collapse in its training objective. In this paper, we suggest understanding BYOL from the view of our proposed interpretable self-supervised learning framework, Run Away From your Teacher (RAFT). RAFT optimizes two objectives at the same time: (i) aligning two views of the same data to similar representations and (ii) running away from the model's Mean Teacher (MT, the exponential moving average of the history models) instead of BYOL's running towards it. The second term of RAFT explicitly prevents the representation collapse and thus makes RAFT a more conceptually reliable framework. We provide basic benchmarks of RAFT on CIFAR10 to validate the effectiveness of our method. Furthermore, we prove that BYOL is equivalent to RAFT under certain conditions, providing solid reasoning for BYOL's counter-intuitive success.
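The two RAFT objectives described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact formulation: the choice of MSE as the distance, the weighting `lam`, the EMA rate `tau`, and all function names here are our assumptions for exposition.

```python
import numpy as np

def mse(a, b):
    """Mean squared distance between two representation vectors."""
    return float(np.mean((a - b) ** 2))

def ema_update(teacher_w, student_w, tau=0.99):
    """Mean Teacher (MT): exponential moving average of the student's
    historical weights. `tau` is an assumed decay rate."""
    return tau * teacher_w + (1 - tau) * student_w

def raft_loss(z1, z2, z1_teacher, z2_teacher, lam=1.0):
    """Sketch of RAFT's objective on representations of two views.

    (i) alignment: pull representations of two views of the same
        data together;
    (ii) cross-model term: run AWAY from the Mean Teacher's
        representations (subtracted, unlike BYOL which minimizes it).
    """
    align = mse(z1, z2)
    cross = 0.5 * (mse(z1, z1_teacher) + mse(z2, z2_teacher))
    return align - lam * cross
```

Because the cross-model term is subtracted, a collapsed encoder (all views mapped to one point, which the teacher then also outputs) cannot minimize this loss, which is the sense in which RAFT explicitly prevents representation collapse.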
