Paper Title

Advances in Asynchronous Parallel and Distributed Optimization

Authors

Mahmoud Assran, Arda Aytekin, Hamid Feyzmahdavian, Mikael Johansson, Michael Rabbat

Abstract

Motivated by large-scale optimization problems arising in the context of machine learning, there have been several advances in the study of asynchronous parallel and distributed optimization methods during the past decade. Asynchronous methods do not require all processors to maintain a consistent view of the optimization variables. Consequently, they generally can make more efficient use of computational resources than synchronous methods, and they are not sensitive to issues like stragglers (i.e., slow nodes) and unreliable communication links. Mathematical modeling of asynchronous methods involves proper accounting of information delays, which makes their analysis challenging. This article reviews recent developments in the design and analysis of asynchronous optimization methods, covering both centralized methods, where all processors update a master copy of the optimization variables, and decentralized methods, where each processor maintains a local copy of the variables. The analysis provides insights as to how the degree of asynchrony impacts convergence rates, especially in stochastic optimization methods.
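
To make the centralized setting concrete, here is a minimal, hypothetical Python sketch (not taken from the paper) of asynchronous SGD with delayed gradients: workers read the server's master copy, and the server applies gradients that may have been computed at stale iterates. The toy least-squares problem, step size, and delay model (`max_delay`, the `pending` queue) are all illustrative assumptions.

```python
# Illustrative sketch of centralized asynchronous optimization with
# delayed ("stale") gradients, simulated in a single process.
import random
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize 0.5 * ||A x - b||^2.
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)

def gradient(x):
    # Full gradient of the least-squares objective at x.
    return A.T @ (A @ x - b)

x = np.zeros(5)      # master copy of the optimization variables
step = 1e-3          # step size (illustrative)
max_delay = 4        # assumed bound on gradient staleness
pending = []         # gradients "in flight": (iteration, gradient) pairs

for t in range(500):
    # A worker reads the current master copy and starts a gradient computation.
    pending.append((t, gradient(x.copy())))
    # Asynchronously, an older gradient may arrive and be applied; it was
    # computed at a stale iterate, so the update uses delayed information.
    if random.random() < 0.8 or len(pending) > max_delay:
        _, g = pending.pop(0)
        x -= step * g  # server update with the (possibly stale) gradient

print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

The `max_delay` bound here stands in for the bounded-delay assumption that commonly appears in analyses of such methods, where the staleness of applied gradients is what connects the degree of asynchrony to the achievable convergence rate.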
