Paper Title
Improved Dimensionality Reduction of various Datasets using Novel Multiplicative Factoring Principal Component Analysis (MPCA)
Paper Authors
Paper Abstract
Principal Component Analysis (PCA) is known to be the most widely applied dimensionality reduction approach. Many improvements have been made to the traditional PCA in order to obtain optimal results in the dimensionality reduction of various datasets. In this paper, we present an improvement to the traditional PCA approach called Multiplicative Factoring Principal Component Analysis (MPCA). The advantage of MPCA over traditional PCA is that a penalty is imposed on the occurrence space through a multiplier, rendering the effect of outliers negligible when seeking out projections. Here we apply two multiplier approaches: a total-distance metric and a cosine-similarity metric. These two approaches learn the relationship that exists between each data point and the principal projections in the feature space. As a result, improved low-rank projections are obtained by multiplying the data iteratively so that the effect of corrupt data in the training set becomes negligible. Experiments were carried out on the YaleB, MNIST, AR, and Isolet datasets, and the results were compared with those obtained from popular dimensionality reduction methods such as traditional PCA and RPCA-OM, as well as some recently published methods such as IFPCA-1 and IFPCA-2.
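The abstract does not give the exact multiplier formulas, so the following is only a minimal sketch of the iterative reweighting idea it describes: points that are poorly explained by the current principal subspace are progressively shrunk through a multiplicative factor before the final projection. The function name `mpca_sketch` and the specific cosine-similarity multiplier (between each sample and its low-rank reconstruction) are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def mpca_sketch(X, n_components=2, n_iters=5):
    """Hypothetical sketch of a multiplicative-factoring PCA loop.

    Each iteration fits PCA on the reweighted data, scores every sample by the
    cosine similarity between the sample and its low-rank reconstruction, and
    multiplies the sample by that score so that points far from the principal
    projections (likely outliers) are progressively downweighted. The paper's
    actual total-distance and cosine-similarity multipliers may differ.
    """
    Xw = X.astype(float)
    for _ in range(n_iters):
        mean = Xw.mean(axis=0)
        Xc = Xw - mean
        # principal directions from the top right singular vectors
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        W = Vt[:n_components].T                      # d x k projection basis
        recon = (Xc @ W) @ W.T                       # low-rank reconstruction
        # cosine similarity between each sample and its reconstruction (in [0, 1])
        num = np.sum(Xc * recon, axis=1)
        den = np.linalg.norm(Xc, axis=1) * np.linalg.norm(recon, axis=1) + 1e-12
        multiplier = np.clip(num / den, 0.0, 1.0)
        # multiplicative penalty: shrink points poorly explained by the subspace
        Xw = mean + Xc * multiplier[:, None]
    # project the original data onto the basis learned from the reweighted data
    return (X - mean) @ W

# toy usage: 100 samples with a few grossly corrupted rows
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X[:5] += 50 * rng.normal(size=(5, 20))              # inject outliers
Z = mpca_sketch(X, n_components=2)
print(Z.shape)                                       # (100, 2)
```

Because the multiplier lies in [0, 1], corrupted samples are pulled toward the mean over successive iterations, so they contribute little to the final principal directions; this mirrors, under the stated assumptions, the penalty-through-multiplication idea in the abstract.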