Paper Title

S-ADDOPT: Decentralized stochastic first-order optimization over directed graphs

Paper Authors

Qureshi, Muhammad I., Xin, Ran, Kar, Soummya, Khan, Usman A.

Paper Abstract

In this report, we study decentralized stochastic optimization to minimize a sum of smooth and strongly convex cost functions when the functions are distributed over a directed network of nodes. In contrast to the existing work, we use gradient tracking to improve certain aspects of the resulting algorithm. In particular, we propose the~\textbf{\texttt{S-ADDOPT}} algorithm that assumes a stochastic first-order oracle at each node and show that for a constant step-size~$\alpha$, each node converges linearly inside an error ball around the optimal solution, the size of which is controlled by~$\alpha$. For decaying step-sizes~$\mathcal{O}(1/k)$, we show that~\textbf{\texttt{S-ADDOPT}} reaches the exact solution sublinearly at~$\mathcal{O}(1/k)$ and its convergence is asymptotically network-independent. Thus the asymptotic behavior of~\textbf{\texttt{S-ADDOPT}} is comparable to centralized stochastic gradient descent. Numerical experiments over both strongly convex and non-convex problems illustrate the convergence behavior and the performance comparison of the proposed algorithm.
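Since the abstract only summarizes the guarantees, the following is a minimal NumPy sketch of an S-ADDOPT-style recursion as described above: push-sum-style mixing with a column-stochastic weight matrix plus gradient tracking, where each node only queries a stochastic first-order oracle. The directed ring, the weight matrix `A`, the toy least-squares costs in `stoch_grad`, and all parameter values (`n`, `d`, `alpha`, the iteration count) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of an S-ADDOPT-style iteration: push-sum-style mixing with a
# column-stochastic weight matrix plus gradient tracking, driven by stochastic
# local gradients. All problem data and constants below are illustrative.

np.random.seed(0)
n, d = 5, 3                                  # n nodes, d-dimensional variable

# Column-stochastic weights for a directed ring: node j splits its mass
# equally between itself and its out-neighbor (j + 1) mod n.
A = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=0))

# Node i privately holds f_i(z) = (1/2m) * ||M_i z - b_i||^2 with m = 10 samples.
M = np.random.randn(n, 10, d)
b = np.random.randn(n, 10)

def stoch_grad(i, z, batch=1):
    """Stochastic gradient of f_i at z from a random minibatch of local samples."""
    idx = np.random.choice(M[i].shape[0], batch, replace=False)
    Mi, bi = M[i][idx], b[i][idx]
    return Mi.T @ (Mi @ z - bi) / batch

# State: x (primal), y (push-sum scaling), z = x / y (local estimate),
# g (last stochastic gradient), w (gradient tracker).
x = np.zeros((n, d))
y = np.ones(n)
z = x / y[:, None]
g = np.array([stoch_grad(i, z[i]) for i in range(n)])
w = g.copy()
alpha = 0.02                                 # constant step-size -> error ball

for k in range(2000):
    x_new = A @ x - alpha * w                # mix states, step along tracked gradient
    y_new = A @ y                            # push-sum weights undo the directed bias
    z_new = x_new / y_new[:, None]
    g_new = np.array([stoch_grad(i, z_new[i]) for i in range(n)])
    w = A @ w + g_new - g                    # track the average stochastic gradient
    x, y, z, g = x_new, y_new, z_new, g_new

print("Local estimates after 2000 iterations (roughly consensual, near-optimal):")
print(np.round(z, 3))
```

With the constant step-size used here, the node estimates settle into a noise-dependent neighborhood of a common minimizer, mirroring the error-ball claim in the abstract; replacing `alpha` with an $\mathcal{O}(1/k)$ schedule would correspond to the exact-convergence regime the abstract describes.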
