Paper Title
Accelerating Parallel Stochastic Gradient Descent via Non-blocking Mini-batches
Paper Authors
Paper Abstract
State-of-the-art (SOTA) decentralized SGD algorithms can overcome the bandwidth bottleneck at the parameter server by using communication collectives like Ring All-Reduce for synchronization. While the parameter updates in distributed SGD may happen asynchronously, there is still a synchronization barrier to ensure that the local training epoch at every learner is complete before the learners can advance to the next epoch. The delay in waiting for the slowest learners (stragglers) remains a problem in the synchronization steps of these state-of-the-art decentralized frameworks. In this paper, we propose (de)centralized Non-blocking SGD, which can address the straggler problem in a heterogeneous environment. The main idea of Non-blocking SGD is to split the original batch into mini-batches, then accumulate the gradients and update the model based on the finished mini-batches. The non-blocking idea can be implemented on top of decentralized algorithms including Ring All-Reduce, D-PSGD, and MATCHA to solve the straggler problem. Moreover, using gradient accumulation to update the model also guarantees convergence and avoids gradient staleness. A run-time analysis that accounts for random straggler delays and the computational efficiency/throughput of devices is also presented to show the advantage of Non-blocking SGD. Experiments on a suite of datasets and deep learning networks validate the theoretical analyses and demonstrate that Non-blocking SGD speeds up training and accelerates convergence. Compared with state-of-the-art decentralized asynchronous algorithms like D-PSGD and MATCHA, Non-blocking SGD takes up to 2x less time to reach the same training loss in a heterogeneous environment.
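The sketch below illustrates only the core idea stated in the abstract: a learner splits its batch into mini-batches, accumulates gradients from whichever mini-batches it finishes before a synchronization point, and updates using the average over the finished count instead of waiting on the full batch. The toy least-squares model, the time-budget deadline, and all variable names are assumptions for illustration; the paper's decentralized communication layer (Ring All-Reduce, D-PSGD, MATCHA) is not shown.

```python
# Minimal single-learner sketch of non-blocking mini-batch gradient accumulation.
# Assumptions: a toy linear least-squares model and a fixed time budget standing
# in for the synchronization deadline; not the paper's full algorithm.
import time
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(10)                                  # model parameters (toy linear model)
X, y = rng.normal(size=(256, 10)), rng.normal(size=256)
lr, num_mini_batches, deadline_s = 0.1, 8, 0.05   # hypothetical hyperparameters

def mini_batch_gradient(w, Xb, yb):
    """Gradient of 0.5 * ||Xb @ w - yb||^2 / len(yb) for one mini-batch."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

for step in range(10):
    accumulated = np.zeros_like(w)
    finished = 0
    start = time.monotonic()
    # Split the original batch into mini-batches and process them until the
    # (assumed) deadline; unfinished mini-batches are skipped rather than
    # blocking the step, so a straggler never stalls the update.
    for Xb, yb in zip(np.array_split(X, num_mini_batches),
                      np.array_split(y, num_mini_batches)):
        if time.monotonic() - start > deadline_s:
            break
        accumulated += mini_batch_gradient(w, Xb, yb)
        finished += 1
    if finished:                                   # average over finished mini-batches only
        w -= lr * accumulated / finished
```

In a decentralized deployment, the averaged accumulated gradient (or the updated model) would be exchanged with neighbors via the chosen collective at each synchronization step; that communication is omitted here.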