Paper title
Accelerated learning algorithms of general fuzzy min-max neural network using a novel hyperbox selection rule
Paper authors
Paper abstract
This paper proposes a method to accelerate the training process of the general fuzzy min-max neural network. The goal is to reduce the number of unsuitable hyperboxes selected as potential candidates, either for the step that expands existing hyperboxes to cover a new input pattern in online learning algorithms, or for the hyperbox aggregation process in agglomerative learning algorithms. Our approach is based on mathematical formulas that form a branch-and-bound solution, which removes hyperboxes guaranteed not to satisfy the expansion or aggregation conditions and, in turn, decreases the training time of the learning algorithms. The efficiency of the proposed method is assessed on a number of widely used data sets. The experimental results show a significant decrease in training time for both online and agglomerative learning algorithms. Notably, the proposed method reduces the training time of the online learning algorithms by a factor of 1.2 to 12, while it accelerates the agglomerative learning algorithms by a factor of 7 to 37 on average.
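The pruning idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact selection rule; it assumes the standard GFMM conventions (each hyperbox stored as a min-point/max-point pair `V[i]`, `W[i]`, inputs normalized to [0, 1], and an expansion threshold `theta` bounding the hyperbox size per dimension). The sketch applies a cheap vectorized necessary condition over all hyperboxes at once, discarding those that can never satisfy the expansion condition before any costly per-hyperbox membership or overlap computation:

```python
import numpy as np

THETA = 0.3  # assumed maximum hyperbox size (expansion threshold)

def can_expand(v, w, x, theta=THETA):
    """Standard GFMM expansion check for one hyperbox [v, w]:
    expanding it to cover point x must keep every dimension's
    extent within theta."""
    return bool(np.all(np.maximum(w, x) - np.minimum(v, x) <= theta))

def prune_candidates(V, W, x, theta=THETA):
    """Illustrative candidate filter (not the paper's exact formulas):
    computes the post-expansion extent of every hyperbox in one
    vectorized pass and keeps only the indices of hyperboxes that
    could still satisfy the expansion condition. Hyperboxes violating
    the bound in any dimension are removed up front, so the expensive
    membership/overlap tests run on far fewer candidates."""
    V = np.asarray(V)                               # shape (n_boxes, n_dims)
    W = np.asarray(W)
    extents = np.maximum(W, x) - np.minimum(V, x)   # per-dimension extents
    keep = (extents <= theta).all(axis=1)           # necessary condition
    return np.flatnonzero(keep)                     # surviving candidate indices
```

Only the surviving indices would then be passed to the full membership computation and overlap tests, which is where the reported training-time savings would come from in this sketch.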