Paper Title
Effective Learning of a GMRF Mixture Model
Paper Authors
Paper Abstract
Learning a Gaussian Mixture Model (GMM) is hard when the number of parameters is too large given the amount of available data. As a remedy, we propose restricting the GMM to a Gaussian Markov Random Field Mixture Model (GMRF-MM), as well as a new method for estimating the latter's sparse precision (i.e., inverse covariance) matrices. When the sparsity pattern of each matrix is known, we propose an efficient optimization method for the Maximum Likelihood Estimate (MLE) of that matrix. When it is unknown, we utilize the popular Graphical Least Absolute Shrinkage and Selection Operator (GLASSO) to estimate that pattern. However, we show that even for a single Gaussian, when GLASSO is tuned to successfully estimate the sparsity pattern, it does so at the price of a substantial bias in the values of the nonzero entries of the matrix, and we show that this problem only worsens in a mixture setting. To overcome this, we discard the nonzero values estimated by GLASSO, keep only its pattern estimate, and use it within the proposed MLE method. This yields an effective two-step procedure that removes the bias. We show that our "debiasing" approach outperforms GLASSO in both the single-GMRF and the GMRF-MM cases. We also show that when learning priors for image patches, our method outperforms GLASSO even if we merely use an educated guess about the sparsity pattern, and that our GMRF-MM outperforms the baseline GMM on real and synthetic high-dimensional datasets.
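To make the two-step idea concrete, below is a minimal, hypothetical sketch in Python for the single-GMRF case. It uses scikit-learn's GraphicalLasso only to obtain a sparsity pattern, then refits the nonzero entries by penalty-free maximum likelihood with the pattern held fixed. The projected gradient ascent used for the refit is a generic stand-in, not the paper's MLE solver, and the data, step size, and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical sketch of the two-step "debiasing" idea for a single GMRF:
# GLASSO supplies the support (sparsity pattern); the nonzero values are then
# refit by maximum likelihood with the support held fixed. The projected
# gradient ascent below is a generic stand-in for the paper's MLE method.

rng = np.random.default_rng(0)

# Toy data: a tridiagonal (hence sparse) true precision matrix.
d = 10
true_prec = np.eye(d) + 0.3 * (np.diag(np.ones(d - 1), 1) + np.diag(np.ones(d - 1), -1))
X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(true_prec), size=500)
S = np.cov(X, rowvar=False)  # empirical covariance

# Step 1: GLASSO, kept only for its estimated sparsity pattern.
glasso = GraphicalLasso(alpha=0.1).fit(X)
pattern = np.abs(glasso.precision_) > 1e-6  # boolean support mask

# Step 2: maximize log det(K) - tr(S K) over precision matrices K sharing the
# pattern, via projected gradient ascent (the gradient is inv(K) - S).
K = np.eye(d)
step = 0.05
for _ in range(2000):
    K = K + step * (np.linalg.inv(K) - S)
    K = np.where(pattern, K, 0.0)  # zero out entries outside the support
    K = (K + K.T) / 2              # keep the iterate symmetric

off_diag = ~np.eye(d, dtype=bool)
print("mean |off-diagonal|, GLASSO :", np.abs(glasso.precision_[off_diag]).mean())
print("mean |off-diagonal|, refit  :", np.abs(K[off_diag]).mean())
print("mean |off-diagonal|, true   :", np.abs(true_prec[off_diag]).mean())
```

On such a toy example, the refit entries are typically larger in magnitude than GLASSO's shrunken estimates, which is the bias the abstract refers to. In the GMRF-MM setting, the analogous refit would be applied per mixture component inside the EM M-step, with a responsibility-weighted covariance taking the place of S.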