Paper Title

Learning Consistent Deep Generative Models from Sparse Data via Prediction Constraints

Authors

Gabriel Hope, Madina Abdrakhmanova, Xiaoyin Chen, Michael C. Hughes, Erik B. Sudderth

Abstract

We develop a new framework for learning variational autoencoders and other deep generative models that balances generative and discriminative goals. Our framework optimizes model parameters to maximize a variational lower bound on the likelihood of observed data, subject to a task-specific prediction constraint that prevents model misspecification from leading to inaccurate predictions. We further enforce a consistency constraint, derived naturally from the generative model, that requires predictions on reconstructed data to match those on the original data. We show that these two contributions -- prediction constraints and consistency constraints -- lead to promising image classification performance, especially in the semi-supervised scenario where category labels are sparse but unlabeled data is plentiful. Our approach enables advances in generative modeling to directly boost semi-supervised classification performance, an ability we demonstrate by augmenting deep generative models with latent variables capturing spatial transformations.
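The abstract describes a constrained objective: maximize the variational lower bound (ELBO) subject to a task-specific prediction constraint, plus a consistency constraint that predictions on reconstructed data match those on the original data. The toy sketch below illustrates one plausible Lagrangian-relaxation form of such an objective; the function names, penalty weights `lam` and `gamma`, and the squared-difference consistency term are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def elbo(x, x_recon, mu, logvar):
    """Standard VAE evidence lower bound with a unit-variance Gaussian
    decoder: reconstruction log-likelihood minus KL(q(z|x) || N(0, I))."""
    recon = -0.5 * np.sum((x - x_recon) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon - kl

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label under predicted probs."""
    return -np.log(probs[label] + 1e-12)

def pc_consistency_objective(x, x_recon, mu, logvar,
                             probs_x, probs_recon, label,
                             lam=10.0, gamma=1.0):
    """Illustrative relaxed objective (assumed form, to be maximized):
    ELBO, penalized by the labeled-data prediction loss (the prediction
    constraint) and by disagreement between predictions on the original
    input and on its reconstruction (the consistency constraint)."""
    pred_loss = cross_entropy(probs_x, label)
    consistency = np.sum((probs_x - probs_recon) ** 2)
    return elbo(x, x_recon, mu, logvar) - lam * pred_loss - gamma * consistency
```

In a semi-supervised setting, the prediction-loss term would apply only to labeled examples, while the ELBO and consistency terms apply to all data; the weights trade off generative fit against predictive accuracy.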
