Paper Title

Time Minimization in Hierarchical Federated Learning

Authors

Chang Liu, Terence Jie Chua, Jun Zhao

Abstract

Federated learning is a modern decentralized machine learning technique in which user equipment (UE) devices perform machine learning tasks locally and then upload the model parameters to a central server. In this paper, we consider a 3-layer hierarchical federated learning system that involves model-parameter exchanges between the cloud and edge servers, and between the edge servers and UEs. In such a hierarchical model, the communication and computation delays of model parameters strongly affect how quickly a predefined global model accuracy can be reached. We therefore formulate a joint learning and communication optimization problem that minimizes the total model-parameter communication and computation delay by optimizing the local and edge iteration counts, and we propose an iterative algorithm to solve it. We then present a time-minimized UE-to-edge association algorithm that reduces the maximum latency of the system. Simulation results show that the global model converges faster under the optimal edge and local iteration counts, and that the hierarchical federated learning latency is minimized under the proposed UE-to-edge association strategy.
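To make the three-layer structure concrete, the following is a minimal, self-contained Python sketch of hierarchical federated averaging on synthetic least-squares data. It illustrates the general scheme the abstract describes, not the authors' actual algorithm: all names and sizes (N_EDGES, KAPPA_LOCAL, KAPPA_EDGE, the learning rate, the data shapes) are hypothetical placeholders, and the paper's optimization of the iteration counts themselves is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
N_EDGES, UES_PER_EDGE, DIM = 2, 3, 5
KAPPA_LOCAL = 4    # local SGD steps per edge round (the "local iteration count")
KAPPA_EDGE = 3     # edge aggregations per cloud round (the "edge iteration count")
CLOUD_ROUNDS = 10
LR = 0.1

# Synthetic least-squares task: each UE holds its own data shard.
w_true = rng.normal(size=DIM)

def make_ue():
    X = rng.normal(size=(20, DIM))
    y = X @ w_true + 0.01 * rng.normal(size=20)
    return X, y

ues = [[make_ue() for _ in range(UES_PER_EDGE)] for _ in range(N_EDGES)]

def local_update(w, X, y):
    # KAPPA_LOCAL gradient steps on the UE's local mean-squared-error loss.
    for _ in range(KAPPA_LOCAL):
        w = w - LR * 2 * X.T @ (X @ w - y) / len(y)
    return w

w_cloud = np.zeros(DIM)
for _ in range(CLOUD_ROUNDS):
    edge_models = []
    for edge in ues:
        w_edge = w_cloud.copy()
        # Each edge runs KAPPA_EDGE rounds of UE training plus edge-level
        # averaging before reporting its model back to the cloud.
        for _ in range(KAPPA_EDGE):
            w_edge = np.mean(
                [local_update(w_edge.copy(), X, y) for X, y in edge], axis=0
            )
        edge_models.append(w_edge)
    # Cloud-level aggregation over the edge models.
    w_cloud = np.mean(edge_models, axis=0)

print("distance to w_true:", np.linalg.norm(w_cloud - w_true))
```

Larger KAPPA_LOCAL and KAPPA_EDGE mean fewer (expensive) cloud rounds but more local computation per round; the trade-off between the two is exactly the delay-minimization knob the paper optimizes.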
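The abstract's final step, UE-to-edge association, can likewise be sketched. The greedy min-max heuristic below is an assumed stand-in for the paper's algorithm, not a reproduction of it: the per-round latencies t[u, e] are random placeholders, each edge is assumed to wait for its slowest UE in a synchronous round, and CAP is a hypothetical per-edge capacity.

```python
import numpy as np

rng = np.random.default_rng(1)
N_UES, N_EDGES, CAP = 8, 3, 3   # hypothetical sizes; CAP caps UEs per edge

# t[u, e]: per-round latency (communication + computation) if UE u
# trains under edge server e. Random placeholders for illustration.
t = rng.uniform(1.0, 5.0, size=(N_UES, N_EDGES))

# Greedy min-max association: handle the UEs with the worst best-case
# latency first; give each one the feasible edge that keeps the
# system-wide maximum latency smallest.
assign = {}
load = [0] * N_EDGES
edge_latency = [0.0] * N_EDGES
for u in sorted(range(N_UES), key=lambda u: -t[u].min()):
    feasible = [e for e in range(N_EDGES) if load[e] < CAP]
    # A synchronous edge waits for its slowest UE, so adding u to e
    # raises e's latency to max(current latency, t[u, e]).
    e_best = min(feasible, key=lambda e: max(edge_latency[e], t[u, e]))
    assign[u] = e_best
    load[e_best] += 1
    edge_latency[e_best] = max(edge_latency[e_best], t[u, e_best])

print("system latency (max over edges):", max(edge_latency))
print("association:", assign)
```

The objective mirrors the abstract: because the cloud round completes only when the slowest edge finishes, reducing the maximum per-edge latency directly reduces the end-to-end hierarchical federated learning latency.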
