Paper Title


On the Convergence of Clustered Federated Learning

Authors

Jie Ma, Guodong Long, Tianyi Zhou, Jing Jiang, Chengqi Zhang

Abstract

Knowledge sharing and model personalization are essential components for tackling the non-IID challenge in federated learning (FL). Most existing FL methods focus on one of two extremes: 1) learning a single shared model to serve all clients with non-IID data, or 2) learning a personalized model for each client, namely personalized FL. There is a trade-off solution, namely clustered FL or cluster-wise personalized FL, which aims to group similar clients into one cluster and then learn a shared model for all clients within that cluster. This paper revisits the research on clustered FL by formulating it as a bi-level optimization framework that can unify existing methods. We propose a new theoretical analysis framework to prove convergence by considering the clusterability among clients. In addition, we embody this framework in an algorithm, named Weighted Clustered Federated Learning (WeCFL). Empirical analysis verifies the theoretical results and demonstrates the effectiveness of the proposed WeCFL under the proposed cluster-wise non-IID settings.
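The core idea of cluster-wise FL described above (group similar clients, then learn one shared model per cluster) can be illustrated with a minimal sketch. This is not the WeCFL algorithm from the paper; it is a simplified illustration, assuming clients are clustered by k-means over their flattened model parameters and aggregated per cluster with a data-size-weighted average (FedAvg-style). The function name and all parameters are hypothetical.

```python
import numpy as np

def cluster_wise_fedavg(client_weights, client_sizes, num_clusters, num_iters=10):
    """Hypothetical sketch of one cluster-wise aggregation round:
    k-means over flattened client parameters, then a data-size-weighted
    average (FedAvg-style) within each cluster."""
    X = np.stack(client_weights)              # (n_clients, dim) flattened params
    sizes = np.asarray(client_sizes, dtype=float)
    # Simple initialization: first k clients as centers (k-means++ would be better).
    centers = X[:num_clusters].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(num_iters):
        # Assign each client to its nearest cluster center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the weighted average of its clients' params.
        for k in range(num_clusters):
            mask = labels == k
            if mask.any():
                w = sizes[mask]
                centers[k] = (w[:, None] * X[mask]).sum(axis=0) / w.sum()
    return labels, centers
```

In a full training loop, each cluster center would be broadcast back to its member clients as the shared cluster model before the next round of local updates.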
