Paper Title

Communication Efficient Federated Learning via Ordered ADMM in a Fully Decentralized Setting

Paper Authors

Yicheng Chen, Rick S. Blum, and Brian M. Sadler

Paper Abstract

The challenge of communication-efficient distributed optimization has attracted attention in recent years. In this paper, a communication-efficient algorithm, called the ordering-based alternating direction method of multipliers (OADMM), is devised in a general fully decentralized network setting where a worker can only exchange messages with its neighbors. Compared to the classical ADMM, a key feature of OADMM is that transmissions are ordered among the workers at each iteration, such that the worker with the most informative data broadcasts its local variable to its neighbors first, and neighbors that have not yet transmitted can update their local variables based on the received transmission. In OADMM, we prohibit workers from transmitting if their current local variables are not sufficiently different from their previously transmitted values. A variant of OADMM, called SOADMM, is also proposed, in which transmissions are still ordered but no node is ever stopped from transmitting at any iteration. Numerical results demonstrate that, for a targeted accuracy, OADMM can significantly reduce the number of communications compared to existing algorithms, including the classical ADMM. We also show numerically that SOADMM can accelerate convergence, resulting in communication savings compared to the classical ADMM.
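
To make the ordering-and-censoring mechanism concrete, below is a minimal Python sketch of one OADMM communication round based only on the abstract's description. The informativeness score, the fixed censoring threshold, and the averaging step a neighbor applies to a received broadcast are all illustrative placeholders; the paper's exact ADMM primal/dual updates are not given in the abstract, so only the ordering and the skip-if-unchanged logic follow the text above.

```python
import numpy as np

def oadmm_round(workers, neighbors, threshold):
    """One communication round following the abstract's description.

    workers:   dict id -> {"x": np.ndarray, "x_last_tx": np.ndarray,
                           "informativeness": float}  (hypothetical state layout)
    neighbors: dict id -> list of neighbor ids
    threshold: censoring threshold (assumed fixed scalar for illustration)
    """
    # Order workers so the one with the most informative data broadcasts first.
    order = sorted(workers, key=lambda w: workers[w]["informativeness"],
                   reverse=True)
    done = set()  # workers whose turn has already passed this round
    for w in order:
        state = workers[w]
        # Censoring rule: transmit only if the local variable differs enough
        # from the previously transmitted value (OADMM only; SOADMM would
        # broadcast unconditionally at this point).
        if np.linalg.norm(state["x"] - state["x_last_tx"]) >= threshold:
            state["x_last_tx"] = state["x"].copy()
            for n in neighbors[w]:
                if n not in done:
                    # Placeholder mixing step: a simple average stands in for
                    # the paper's actual ADMM update, which the abstract
                    # does not specify.
                    workers[n]["x"] = 0.5 * (workers[n]["x"] + state["x"])
        done.add(w)

# Toy usage on a 3-worker path graph 0 -- 1 -- 2, with made-up scores.
rng = np.random.default_rng(0)
workers = {i: {"x": rng.standard_normal(2),
               "x_last_tx": np.zeros(2),
               "informativeness": float(i)} for i in range(3)}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
oadmm_round(workers, neighbors, threshold=0.1)
```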
