Paper Title

Federated Learning With Quantized Global Model Updates

Paper Authors

Amiri, Mohammad Mohammadi, Gunduz, Deniz, Kulkarni, Sanjeev R., Poor, H. Vincent

Paper Abstract


We study federated learning (FL), which enables mobile devices to utilize their local datasets to collaboratively train a global model with the help of a central server, while keeping data localized. At each iteration, the server broadcasts the current global model to the devices for local training, and aggregates the local model updates from the devices to update the global model. Previous work on the communication efficiency of FL has mainly focused on the aggregation of model updates from the devices, assuming perfect broadcasting of the global model. In this paper, we instead consider broadcasting a compressed version of the global model. This further reduces the communication cost of FL, which can be particularly limited when the global model is to be transmitted over a wireless medium. We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted. We analyze the convergence behavior of the proposed LFL algorithm assuming the availability of accurate local model updates at the server. Numerical experiments show that the proposed LFL scheme, which quantizes the global model update (with respect to the global model estimate at the devices) rather than the global model itself, significantly outperforms other existing schemes that study quantization of the global model in the parameter server (PS)-to-device direction. Also, the performance loss of the proposed scheme is marginal compared to the fully lossless approach, where the PS and the devices transmit their messages entirely without any quantization.
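The abstract's key idea is that the server broadcasts a quantized version of the global model *update* (the difference between the new global model and the devices' current estimate of it) rather than the model itself, since this difference has a much smaller dynamic range and quantizes more accurately. The sketch below illustrates that idea with a simple unbiased stochastic uniform quantizer; the paper's actual quantizer and protocol details are not given in the abstract, so all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def stochastic_quantize(x, num_levels=256, rng=None):
    """Unbiased uniform stochastic quantizer over [x.min(), x.max()].
    Illustrative only; the paper's exact quantizer is not specified
    in the abstract."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    step = (hi - lo) / (num_levels - 1)
    scaled = (x - lo) / step
    floor = np.floor(scaled)
    # Round up with probability equal to the fractional part,
    # so the quantizer is unbiased in expectation.
    q = floor + (rng.random(x.shape) < (scaled - floor))
    return lo + q * step

def broadcast_update(global_model, device_estimate, num_levels=256):
    """Server side: quantize the model *update* w.r.t. the devices'
    current estimate, not the model itself (the LFL idea)."""
    update = global_model - device_estimate  # small range -> fine quantization
    return stochastic_quantize(update, num_levels)

# Device side (hypothetical shapes/values): refine the local copy
# of the global model by adding the received quantized update.
rng = np.random.default_rng(0)
theta = rng.normal(size=1000)                      # new global model at the PS
theta_hat = theta + 0.01 * rng.normal(size=1000)   # devices' stale estimate
theta_hat_new = theta_hat + broadcast_update(theta, theta_hat)
```

Because the update `theta - theta_hat` spans a far narrower range than `theta` itself, the same number of quantization levels yields a much smaller reconstruction error than quantizing the full global model would.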
