Paper Title

Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning

Authors

Jinhyun So, Basak Guler, A. Salman Avestimehr

Abstract

Federated learning is a distributed framework for training machine learning models over the data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with $N$ users achieves a secure aggregation overhead of $O(N\log{N})$, as opposed to $O(N^2)$, while tolerating up to a user dropout rate of $50\%$. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linear in the number of users, and provides up to $40\times$ speedup over the state-of-the-art protocols with up to $N=200$ users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
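The abstract names additive secret sharing as the primitive Turbo-Aggregate builds on: each user splits its (quantized) model update into random shares that sum to the update, so the server can recover only the aggregate, never an individual update. The following is a minimal sketch of that primitive alone, not of the full Turbo-Aggregate protocol; the field modulus and function names are illustrative assumptions.

```python
import random

PRIME = 2**31 - 1  # illustrative field modulus; all arithmetic is mod PRIME


def make_shares(secret, n):
    """Split `secret` into n additive shares that sum to it mod PRIME.

    Any n-1 shares look uniformly random, so a subset of them
    reveals nothing about the secret.
    """
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares


def aggregate(all_shares):
    """Sum every share from every user.

    Individual updates stay hidden; only the total survives.
    """
    return sum(s for shares in all_shares for s in shares) % PRIME


# Three users holding quantized model updates 10, 20, 30:
updates = [10, 20, 30]
all_shares = [make_shares(u, 3) for u in updates]
assert aggregate(all_shares) == sum(updates) % PRIME  # recovers 60
```

Turbo-Aggregate's contribution, per the abstract, is organizing users into groups on a ring and adding coded redundancy on top of such shares, which is what brings the overhead from $O(N^2)$ down to $O(N\log{N})$ while tolerating up to a $50\%$ dropout rate.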
