Paper Title
Subspace Learning Machine (SLM): Methodology and Performance
Paper Authors
Abstract
Inspired by the feedforward multilayer perceptron (FF-MLP), decision tree (DT) and extreme learning machine (ELM), a new classification model, called the subspace learning machine (SLM), is proposed in this work. SLM first identifies a discriminant subspace, $S^0$, by examining the discriminant power of each input feature. Then, it uses probabilistic projections of features in $S^0$ to yield 1D subspaces and finds the optimal partition for each of them. This is equivalent to partitioning $S^0$ with hyperplanes. A criterion is developed to choose the best $q$ partitions that yield $2q$ partitioned subspaces among them. We assign $S^0$ to the root node of a decision tree and the intersections of $2q$ subspaces to its child nodes of depth one. The partitioning process is recursively applied at each child node to build an SLM tree. When the samples at a child node are sufficiently pure, the partitioning process stops and each leaf node makes a prediction. The idea can be generalized to regression, leading to the subspace learning regressor (SLR). Furthermore, ensembles of SLM/SLR trees can yield a stronger predictor. Extensive experiments are conducted for performance benchmarking among SLM/SLR trees, ensembles and classical classifiers/regressors.
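The core step described above — projecting features of the discriminant subspace $S^0$ onto 1D subspaces and choosing the best hyperplane partition — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes Gini impurity as the purity criterion and random Gaussian directions as the probabilistic projections, and it finds a single best split rather than the top $q$.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array (lower = purer)."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_1d_split(z, y):
    """Best threshold on a 1D projection z, by weighted Gini impurity."""
    order = np.argsort(z)
    z, y = z[order], y[order]
    best_cost, best_t = np.inf, None
    for i in range(1, len(z)):
        if z[i] == z[i - 1]:
            continue  # cannot separate identical projection values
        cost = (i * gini(y[:i]) + (len(y) - i) * gini(y[i:])) / len(y)
        if cost < best_cost:
            best_cost, best_t = cost, 0.5 * (z[i] + z[i - 1])
    return best_cost, best_t

def slm_partition(X, y, n_proj=20, seed=None):
    """One SLM-style partitioning step: draw random 1D projections of
    the (here: full) feature space and keep the best hyperplane split."""
    rng = np.random.default_rng(seed)
    best_cost, best_a, best_t = np.inf, None, None
    for _ in range(n_proj):
        a = rng.standard_normal(X.shape[1])
        a /= np.linalg.norm(a)        # random projection direction
        cost, t = best_1d_split(X @ a, y)
        if t is not None and cost < best_cost:
            best_cost, best_a, best_t = cost, a, t
    return best_cost, best_a, best_t  # split rule: sign(X @ a - t)
```

An SLM tree would apply `slm_partition` recursively to the samples on each side of the returned hyperplane until the leaves are sufficiently pure; keeping the best $q$ splits per node instead of one yields the $2q$ partitioned subspaces mentioned above.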