Paper Title
Hierarchical Federated Learning through LAN-WAN Orchestration
Paper Authors
Paper Abstract
Federated learning (FL) was designed to enable mobile phones to collaboratively learn a global model without uploading their private data to a cloud server. However, existing FL protocols suffer from a critical communication bottleneck, coupled with privacy concerns, because the federated network is usually powered by a wide-area network (WAN). Such a WAN-driven FL design leads to significantly higher monetary cost and much slower model convergence. In this work, we propose an efficient FL protocol, which involves a hierarchical aggregation mechanism in the local-area network (LAN), exploiting its abundant bandwidth and almost negligible monetary cost compared with the WAN. Our proposed FL protocol accelerates the learning process and reduces the monetary cost through frequent local aggregation within the same LAN and infrequent global aggregation on a cloud across the WAN. We further design a concrete FL platform, namely LanFL, that incorporates several key techniques to handle the challenges introduced by LANs: a cloud-device aggregation architecture, intra-LAN peer-to-peer (P2P) topology generation, and inter-LAN bandwidth capacity heterogeneity. We evaluate LanFL on 2 typical Non-IID datasets, which reveals that LanFL can significantly accelerate FL training (1.5x-6.0x), save WAN traffic (18.3x-75.6x), and reduce monetary cost (3.8x-27.2x) while preserving the model accuracy.
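The hierarchical aggregation pattern described in the abstract can be sketched as follows. This is a minimal illustration of the communication schedule only, not LanFL's actual implementation: devices in the same LAN average their models every round, while a global (WAN) average across LANs happens only every `wan_period` rounds. All function and parameter names here are hypothetical, and local training is omitted so that only the aggregation structure is shown.

```python
import numpy as np

def fedavg(weights, sizes):
    """Sample-count-weighted average of model parameter vectors (FedAvg)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

def hierarchical_fl(lans, rounds, wan_period):
    """Simulate the LAN-WAN aggregation schedule.

    lans: list of LANs, each a list of (params, num_samples) per device.
    Returns the final global model and the number of WAN synchronizations.
    """
    global_model, wan_syncs = None, 0
    for r in range(1, rounds + 1):
        # Frequent intra-LAN aggregation: high bandwidth, negligible cost.
        lan_models = []
        for lan in lans:
            ws, ns = zip(*lan)
            lan_models.append((fedavg(ws, ns), sum(ns)))
        if r % wan_period == 0:
            # Infrequent inter-LAN (global) aggregation over the costly WAN.
            ws, ns = zip(*lan_models)
            global_model = fedavg(ws, ns)
            wan_syncs += 1
            # Broadcast the global model back to every device.
            lans = [[(global_model.copy(), n) for _, n in lan] for lan in lans]
    return global_model, wan_syncs
```

With `rounds=4` and `wan_period=2`, only 2 of the 4 rounds touch the WAN, which is the source of the traffic and cost savings the paper reports.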