Paper Title

An Empirical Study of Personalized Federated Learning

Paper Authors

Koji Matsuda, Yuya Sasaki, Chuan Xiao, Makoto Onizuka

Paper Abstract

Federated learning is a distributed machine learning approach in which a single server and multiple clients collaboratively build machine learning models without sharing the datasets held by the clients. A challenging issue in federated learning is data heterogeneity (i.e., data distributions may differ across clients). To cope with this issue, numerous federated learning methods aim at personalized federated learning and build models optimized for individual clients. Whereas existing studies have empirically evaluated their own methods, the experimental settings (e.g., comparison methods, datasets, and client settings) in these studies differ from each other, and it is unclear which personalized federated learning method achieves the best performance and how much progress can be made by using these methods instead of standard (i.e., non-personalized) federated learning. In this paper, we benchmark the performance of existing personalized federated learning methods through comprehensive experiments to evaluate the characteristics of each method. Our experimental study shows that (1) there is no champion method, (2) large data heterogeneity often leads to highly accurate predictions, and (3) standard federated learning methods (e.g., FedAvg) with fine-tuning often outperform personalized federated learning methods. We open our benchmark tool, FedBench, for researchers to conduct experimental studies with various experimental settings.
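The abstract contrasts personalized methods with standard FedAvg followed by local fine-tuning. As background, the server-side FedAvg aggregation step (a sample-size-weighted average of client parameters) can be sketched as follows; the function name and toy values are illustrative and are not taken from the paper or its FedBench tool.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server aggregation: average client parameters,
    weighting each client by its number of local samples.

    client_weights: list of flat parameter vectors, one per client
    client_sizes:   local sample counts, used as aggregation weights
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)   # shape: (n_clients, n_params)
    coeffs = sizes / sizes.sum()         # n_k / n for each client k
    return coeffs @ stacked              # sum_k (n_k / n) * w_k

# Toy example: two clients; client 2 holds 3/4 of the data,
# so its parameters dominate the average.
w_global = fedavg_aggregate(
    [np.array([1.0, 2.0]), np.array([3.0, 6.0])],
    client_sizes=[1, 3],
)
# -> [2.5, 5.0]
```

In the non-personalized baseline the paper favors, each client would then fine-tune `w_global` on its own data for a few local epochs, yielding per-client models without a dedicated personalization algorithm.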
