Paper Title

Block Layer Decomposition schemes for training Deep Neural Networks

Paper Authors

Laura Palagi, Ruggiero Seccia

Abstract

The estimation of Deep Feedforward Neural Networks' (DFNNs) weights relies on the solution of a very large nonconvex optimization problem that may have many local (non-global) minimizers, saddle points, and large plateaus. As a consequence, optimization algorithms can be attracted toward local minimizers, which can lead to bad solutions or slow down the optimization process. Furthermore, the time needed to find good solutions to the training problem depends on both the number of samples and the number of variables. In this work, we show how Block Coordinate Descent (BCD) methods can be applied to improve the performance of state-of-the-art algorithms by avoiding bad stationary points and flat regions. We first describe a batch BCD method able to effectively tackle the network's depth, and then we extend the algorithm further by proposing a minibatch BCD framework that scales with respect to both the number of variables and the number of samples, obtained by embedding the BCD approach into a minibatch scheme. Through extensive numerical results on standard datasets and several network architectures, we show how applying BCD methods to the training phase of DFNNs makes it possible to outperform standard batch and minibatch algorithms, improving both the training phase and the generalization performance of the networks.
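
As a purely illustrative sketch (not the authors' exact Block Layer Decomposition algorithm), the following Python/PyTorch snippet shows one way a block coordinate descent over layers could be embedded in a minibatch training loop: for each minibatch, the layers are cycled through and only one layer's weights are updated at a time while all other layers are kept fixed. The architecture, synthetic data, and step size are arbitrary choices made for the example.

```python
# Minimal sketch: block coordinate descent over layers inside a minibatch loop.
# Purely illustrative; not the paper's exact BLD / minibatch-BCD algorithm.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Toy regression data (synthetic, for illustration only).
X = torch.randn(1024, 20)
y = torch.randn(1024, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

# A small deep feedforward network; each Linear layer is treated as one block.
model = nn.Sequential(
    nn.Linear(20, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
blocks = [m for m in model if isinstance(m, nn.Linear)]
loss_fn = nn.MSELoss()
lr = 1e-2

for epoch in range(5):
    for xb, yb in loader:
        # Cycle over blocks: update one layer's weights at a time,
        # keeping every other layer fixed (the block coordinate step).
        for block in blocks:
            for p in model.parameters():
                p.requires_grad_(False)
            for p in block.parameters():
                p.requires_grad_(True)

            loss = loss_fn(model(xb), yb)
            grads = torch.autograd.grad(loss, list(block.parameters()))
            with torch.no_grad():
                for p, g in zip(block.parameters(), grads):
                    p -= lr * g
    print(f"epoch {epoch}: last minibatch loss = {loss.item():.4f}")
```

The alternation over layer blocks within a minibatch scheme is the structural idea described in the abstract; the inner update used here is a plain gradient step, chosen only to keep the sketch short.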
