Paper Title

Guided Variational Autoencoder for Disentanglement Learning

Paper Authors

Zheng Ding, Yifan Xu, Weijian Xu, Gaurav Parmar, Yang Yang, Max Welling, Zhuowen Tu

Paper Abstract

We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning. The learning objective is achieved by providing signals to the latent encoding/embedding in VAE without changing its main backbone architecture, hence retaining the desirable properties of the VAE. We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE. In the unsupervised strategy, we guide the VAE learning by introducing a lightweight decoder that learns latent geometric transformation and principal components; in the supervised strategy, we use an adversarial excitation and inhibition mechanism to encourage the disentanglement of the latent variables. Guided-VAE enjoys its transparency and simplicity for the general representation learning task, as well as disentanglement learning. On a number of experiments for representation learning, improved synthesis/sampling, better disentanglement for classification, and reduced classification errors in meta-learning have been observed.
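The sketch below illustrates the core idea described in the abstract: a standard VAE backbone whose latent code is additionally fed to a lightweight auxiliary decoder, so that a few guided latent dimensions are pushed to capture dominant, principal-component-like factors of variation. This is a minimal illustration in PyTorch, not the authors' implementation; the module names (e.g. `LightweightDecoder`), the dimension sizes, the way the latent code is split, and the loss weighting are assumptions, and the paper's geometric-transformation guidance and adversarial excitation/inhibition mechanism are not shown here.

```python
# Illustrative sketch of the Guided-VAE idea: a vanilla VAE backbone plus a
# lightweight auxiliary decoder that provides a guidance signal to part of the
# latent code. Names, dimensions, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Vanilla VAE backbone (left unchanged by the guidance signal)."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar, z

class LightweightDecoder(nn.Module):
    """Small auxiliary decoder (hypothetical name) that reconstructs x from
    only the guided latent dimensions, acting as the guidance signal."""
    def __init__(self, guided_dim=2, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(guided_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim), nn.Sigmoid())

    def forward(self, z_guided):
        return self.net(z_guided)

def guided_vae_loss(x, vae, light_dec, guided_dims=2, beta=1.0, gamma=1.0):
    recon, mu, logvar, z = vae(x)
    # Standard VAE objective: reconstruction + KL divergence.
    rec = F.binary_cross_entropy(recon, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Guidance term: the lightweight decoder must explain x using only the
    # first `guided_dims` latent variables, encouraging those dimensions to
    # carry dominant (principal-component-like) factors of variation.
    guided_rec = F.binary_cross_entropy(light_dec(z[:, :guided_dims]), x,
                                        reduction='sum')
    return rec + beta * kld + gamma * guided_rec
```

A usage sketch: build `vae = VAE()` and `light_dec = LightweightDecoder()`, optimize their joint parameters with Adam on flattened images in [0, 1], and backpropagate `guided_vae_loss(x_batch, vae, light_dec)` each step. Because the guidance only adds a loss term on the latent code, the VAE backbone itself is unchanged, which matches the abstract's claim that the desirable properties of the vanilla VAE are retained.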
