Paper Title

Improving Efficiency in Large-Scale Decentralized Distributed Training

Paper Authors

Wei Zhang, Xiaodong Cui, Abdullah Kayi, Mingrui Liu, Ulrich Finkler, Brian Kingsbury, George Saon, Youssef Mroueh, Alper Buyuktosunoglu, Payel Das, David Kung, Michael Picheny

Paper Abstract

Decentralized Parallel SGD (D-PSGD) and its asynchronous variant Asynchronous Parallel SGD (AD-PSGD) are a family of distributed learning algorithms that have been demonstrated to perform well for large-scale deep learning tasks. One drawback of (A)D-PSGD is that the spectral gap of the mixing matrix decreases when the number of learners in the system increases, which hampers convergence. In this paper, we investigate techniques to accelerate (A)D-PSGD based training by improving the spectral gap while minimizing the communication cost. We demonstrate the effectiveness of our proposed techniques by running experiments on the 2000-hour Switchboard speech recognition task and the ImageNet computer vision task. On an IBM P9 supercomputer, our system is able to train an LSTM acoustic model in 2.28 hours with 7.5% WER on the Hub5-2000 Switchboard (SWB) test set and 13.3% WER on the CallHome (CH) test set using 64 V100 GPUs and in 1.98 hours with 7.7% WER on SWB and 13.3% WER on CH using 128 V100 GPUs, the fastest training time reported to date.
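As an illustrative aside (not taken from the paper), the sketch below shows the phenomenon the abstract refers to: for a simple ring communication topology, the spectral gap of the mixing matrix (one minus the magnitude of its second-largest eigenvalue) shrinks toward zero as the number of learners grows, which slows information mixing and hence convergence. The ring topology, the uniform 1/3 weights, and the helper names `ring_mixing_matrix` / `spectral_gap` are assumptions made for illustration; the paper's actual topologies and mixing weights may differ.

```python
# Illustrative sketch (assumed ring topology, not the paper's exact setup):
# the spectral gap of a doubly stochastic mixing matrix shrinks as the
# number of learners n grows.
import numpy as np

def ring_mixing_matrix(n):
    """Hypothetical mixing matrix: each learner averages its model with
    its two ring neighbors and itself, each with weight 1/3."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1.0 / 3.0
        W[i, (i - 1) % n] = 1.0 / 3.0
        W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def spectral_gap(W):
    """Spectral gap = 1 - |second-largest eigenvalue| of the mixing matrix."""
    magnitudes = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - magnitudes[1]

for n in (8, 16, 32, 64, 128):
    print(f"learners={n:4d}  spectral gap={spectral_gap(ring_mixing_matrix(n)):.4f}")
```

Running this prints a spectral gap that decays roughly as 1/n^2 for the ring, which is the behavior the paper's techniques aim to counteract while keeping communication cost low.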
