Paper Title

Representation Learning Through Latent Canonicalizations

Paper Authors

Or Litany, Ari Morcos, Srinath Sridhar, Leonidas Guibas, Judy Hoffman

Paper Abstract

We seek to learn a representation on a large annotated data source that generalizes to a target domain using limited new supervision. Many prior approaches to this problem have focused on learning "disentangled" representations so that as individual factors vary in a new domain, only a portion of the representation need be updated. In this work, we seek the generalization power of disentangled representations, but relax the requirement of explicit latent disentanglement and instead encourage linearity of individual factors of variation by requiring them to be manipulable by learned linear transformations. We dub these transformations latent canonicalizers, as they aim to modify the value of a factor to a pre-determined (but arbitrary) canonical value (e.g., recoloring the image foreground to black). Assuming a source domain with access to meta-labels specifying the factors of variation within an image, we demonstrate experimentally that our method helps reduce the number of observations needed to generalize to a similar target domain when compared to a number of supervised baselines.
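To make the idea of a latent canonicalizer concrete, below is a minimal PyTorch-style sketch based only on the abstract's description: an encoder produces a latent code, one learned linear map per factor of variation pushes that code toward the factor's canonical value, and a decoder renders the result. The class and argument names (Encoder/Decoder modules, `latent_dim`, `factors`) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class LatentCanonicalizerModel(nn.Module):
    """Sketch of representation learning with latent canonicalizers.

    Assumes an external encoder/decoder pair; only the per-factor linear
    canonicalizers are specific to the idea described in the abstract.
    """

    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 latent_dim: int, factors: list):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        # One learned linear transformation per factor of variation.
        # Applying it should move the latent code to the point where that
        # factor takes its canonical value (e.g., foreground recolored to black).
        self.canonicalizers = nn.ModuleDict({
            name: nn.Linear(latent_dim, latent_dim, bias=False)
            for name in factors
        })

    def forward(self, x: torch.Tensor, factor: str) -> torch.Tensor:
        z = self.encoder(x)                       # latent representation
        z_canon = self.canonicalizers[factor](z)  # linearly canonicalize one factor
        return self.decoder(z_canon)              # image with that factor set to canon
```

Training such a model would compare the decoded output against an image in which the chosen factor has been set to its canonical value; the source-domain meta-labels mentioned in the abstract are what make that supervision available.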
