Paper Title
On Implicit Regularization in $\beta$-VAEs
Paper Authors
Paper Abstract
While the impact of variational inference (VI) on posterior inference in a fixed generative model is well-characterized, its role in regularizing a learned generative model when used in variational autoencoders (VAEs) is poorly understood. We study the regularizing effects of variational distributions on learning in generative models from two perspectives. First, we analyze the role that the choice of variational family plays in imparting uniqueness to the learned model by restricting the set of optimal generative models. Second, we study the regularization effect of the variational family on the local geometry of the decoding model. This analysis uncovers the regularizer implicit in the $\beta$-VAE objective, and leads to an approximation consisting of a deterministic autoencoding objective plus analytic regularizers that depend on the Hessian or Jacobian of the decoding model, unifying VAEs with recent heuristics proposed for training regularized autoencoders. We empirically verify these findings, observing that the proposed deterministic objective exhibits similar behavior to the $\beta$-VAE in terms of objective value and sample quality.
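For context, a brief sketch of the objective the abstract refers to. The first display is the standard $\beta$-VAE evidence lower bound over encoder parameters $\phi$ and decoder parameters $\theta$; the second is only a schematic of the deterministic approximation described in the abstract, where $\mu_{\phi}(x)$ denotes the encoder mean and $R$ is a placeholder for the analytic Jacobian- or Hessian-dependent regularizer derived in the paper (the symbols $\mu_{\phi}$ and $R$ are notational assumptions introduced here for illustration, not taken from the abstract):

$$\mathcal{L}_{\beta}(\theta, \phi; x) = \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[\log p_{\theta}(x \mid z)\right] - \beta \, \mathrm{KL}\!\left(q_{\phi}(z \mid x) \,\|\, p(z)\right)$$

$$\mathcal{L}_{\beta}(\theta, \phi; x) \approx \log p_{\theta}\!\left(x \mid \mu_{\phi}(x)\right) - \beta \, R(x; \theta, \phi)$$

The second line makes the abstract's claim concrete: replacing the stochastic encoder with its mean yields a deterministic autoencoding term, and the effect of the noise and the KL penalty is captured by the analytic regularizer $R$ on the decoder's local geometry.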