Paper Title

On minimal variations for unsupervised representation learning

Authors

Vivien Cabannes, Alberto Bietti, Randall Balestriero

Abstract

Unsupervised representation learning aims at describing raw data efficiently to solve various downstream tasks. It has been approached with many techniques, such as manifold learning, diffusion maps, or more recently self-supervised learning. Those techniques are arguably all based on the underlying assumption that target functions, associated with future downstream tasks, have low variations in densely populated regions of the input space. Unveiling minimal variations as a guiding principle behind unsupervised representation learning paves the way to better practical guidelines for self-supervised learning algorithms.
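The low-variation assumption described above underlies classical methods such as Laplacian eigenmaps and diffusion maps: representations are chosen so that they vary as little as possible between points connected through dense regions of the data. The following is a minimal, illustrative sketch of that idea (not code from the paper); the function name, the Gaussian affinity, and all parameters are assumptions made for the example.

```python
import numpy as np

def laplacian_embedding(X, sigma=0.5, dim=2):
    """Toy Laplacian-eigenmap embedding: coordinates whose
    variations are smallest across densely connected points."""
    # Gaussian affinity between all pairs of points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W
    L = np.diag(W.sum(axis=1)) - W
    # Eigenvectors with the smallest eigenvalues have the lowest
    # variation along the densely populated regions of the graph
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]  # skip the constant eigenvector

# Two dense, well-separated clusters of toy 2-D data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(3.0, 0.1, (20, 2))])
Z = laplacian_embedding(X, dim=1)
```

On this toy input the first non-constant eigenvector is nearly piecewise constant: it takes one sign on each dense cluster, so a downstream target function that respects the density structure is easy to express in the learned coordinates.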
