Paper Title
Learnable Graph-regularization for Matrix Decomposition
Paper Authors
Paper Abstract
Low-rank approximation models of data matrices have become important machine learning and data mining tools in many fields, including computer vision, text mining, and bioinformatics. They embed high-dimensional data into low-dimensional spaces, which mitigates the effects of noise and uncovers latent relations. To make the learned representations inherit the structure of the original data, graph-regularization terms are often added to the loss function. However, graphs constructed a priori often fail to reflect the true network connectivity and the intrinsic relationships. In addition, many graph-regularized methods fail to take the dual spaces into account. Probabilistic models are often used to model the distribution of the representations, but most previous methods assume, for simplicity, that the hidden variables are independent and identically distributed. To this end, we propose a learnable graph-regularization model for matrix decomposition (LGMD), which builds a bridge between graph-regularized methods and probabilistic matrix decomposition models. LGMD learns two graphical structures (i.e., two precision matrices) iteratively and in real time via sparse precision matrix estimation, and is more robust to noise and missing entries. Extensive numerical results and comparisons with competing methods demonstrate its effectiveness.
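The abstract's central idea — alternating a low-rank factorization with on-the-fly estimation of two precision matrices that serve as learned graph regularizers — can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's exact algorithm: the gradient updates, the trace-form penalties tr(U^T Theta_u U) and tr(V^T Theta_v V), the hyperparameter values, and the soft-thresholded inverse covariance (a crude stand-in for proper sparse precision estimation such as the graphical lasso) are all assumptions made for illustration.

```python
import numpy as np

def soft_threshold(A, tau):
    # Crude sparsification: shrink off-diagonal entries toward zero.
    # (A stand-in for graphical-lasso-style sparse precision estimation.)
    S = np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)
    np.fill_diagonal(S, np.diag(A))  # keep the diagonal intact
    return S

def estimate_precision(Z, ridge=1e-2, tau=0.05):
    # Z: samples in rows. Ridge-regularized inverse covariance,
    # then soft-thresholded to obtain a sparse precision matrix.
    C = np.cov(Z, rowvar=False) + ridge * np.eye(Z.shape[1])
    return soft_threshold(np.linalg.inv(C), tau)

def lgmd_sketch(X, k=3, alpha=0.01, beta=0.01, lr=1e-3, iters=500, seed=0):
    # Sketch of the LGMD idea: minimize
    #   ||X - U V^T||_F^2 + alpha * tr(U^T Theta_u U) + beta * tr(V^T Theta_v V)
    # by gradient steps on U, V, while periodically re-estimating the
    # row-space and column-space precision matrices from the current factors.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    Theta_u = np.eye(m)  # row-space precision, starts as identity (no graph)
    Theta_v = np.eye(n)  # column-space precision
    for t in range(iters):
        R = X - U @ V.T  # reconstruction residual
        # Gradient steps; the precision matrices act as graph penalties.
        U -= lr * (-2 * R @ V + 2 * alpha * Theta_u @ U)
        V -= lr * (-2 * R.T @ U + 2 * beta * Theta_v @ V)
        if (t + 1) % 50 == 0:
            # Refresh the learned graph structures from the factors:
            # columns of U are treated as k samples of an m-dim variable.
            Theta_u = estimate_precision(U.T)
            Theta_v = estimate_precision(V.T)
    return U, V, Theta_u, Theta_v
```

The key contrast with fixed graph-regularized factorization is the periodic refresh: rather than a graph built once from raw (possibly noisy) data, the regularizing structure is re-estimated from the current low-rank factors at each stage.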