Paper Title
Robust, Accurate Stochastic Optimization for Variational Inference
Authors
Abstract
We consider the problem of fitting variational posterior approximations using stochastic optimization methods. The performance of these approximations depends on (1) how well the variational family matches the true posterior distribution, (2) the choice of divergence, and (3) the optimization of the variational objective. We show that even in the best-case scenario, when the exact posterior belongs to the assumed variational family, common stochastic optimization methods lead to poor variational approximations if the problem dimension is moderately large. We also demonstrate that these methods are not robust across diverse model types. Motivated by these findings, we develop a more robust and accurate stochastic optimization framework by viewing the underlying optimization algorithm as producing a Markov chain. Our approach is theoretically motivated and includes a diagnostic for convergence and a novel stopping rule, both of which are robust to noisy evaluations of the objective function. We show empirically that the proposed framework works well on a diverse set of models: it can automatically detect stochastic optimization failure or inaccurate variational approximations.
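To make the Markov-chain view concrete, here is a minimal sketch (not the paper's implementation) of the idea: noisy gradient-descent iterates on a toy quadratic objective are treated like MCMC draws, and a standard split-R-hat statistic over the tail of the iterate sequence serves as a convergence diagnostic that does not require evaluating the (noisy) objective itself. The learning rate, noise scale, and warm-up fraction below are illustrative choices, not values from the paper.

```python
import numpy as np

def split_rhat(chain):
    """Split an iterate sequence into two halves and compute the
    potential scale reduction factor (split-R-hat); values near 1
    suggest the iterates have reached a stationary distribution."""
    n = len(chain) // 2
    halves = np.array([chain[:n], chain[n:2 * n]])
    within = halves.var(axis=1, ddof=1).mean()        # average within-half variance
    between = n * halves.mean(axis=1).var(ddof=1)     # variance between half-means
    var_est = (n - 1) / n * within + between / n      # pooled variance estimate
    return float(np.sqrt(var_est / within))

# Noisy SGD on f(x) = x^2: the iterates form a Markov chain that,
# with a fixed step size, converges to a stationary distribution
# around the optimum rather than to a point.
rng = np.random.default_rng(0)
x, step, iterates = 5.0, 0.05, []
for _ in range(2000):
    grad = 2 * x + rng.normal(scale=0.5)  # stochastic gradient estimate
    x -= step * grad
    iterates.append(x)

tail = iterates[len(iterates) // 2:]  # discard warm-up, diagnose the tail
print(f"split-R-hat on tail: {split_rhat(tail):.3f}")
```

A split-R-hat well above 1 on the tail would indicate the optimizer has not yet stabilized (e.g. it is still drifting toward the optimum), which is the kind of failure the abstract says the proposed framework detects automatically.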