Paper Title

FedRN: Exploiting k-Reliable Neighbors Towards Robust Federated Learning

Authors

SangMook Kim, Wonyoung Shin, Soohyuk Jang, Hwanjun Song, Se-Young Yun

Abstract

Robustness is becoming another important challenge of federated learning in that the data collection process in each client is naturally accompanied by noisy labels. However, it is far more complex and challenging owing to varying levels of data heterogeneity and noise over clients, which exacerbates the client-to-client performance discrepancy. In this work, we propose a robust federated learning method called FedRN, which exploits k-reliable neighbors with high data expertise or similarity. Our method helps mitigate the gap between low- and high-performance clients by training only with a selected set of clean examples, identified by their ensembled mixture models. We demonstrate the superiority of FedRN via extensive evaluations on three real-world or synthetic benchmark datasets. Compared with existing robust training methods, the results show that FedRN significantly improves the test accuracy in the presence of noisy labels.
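
To make the clean-example selection idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of how per-sample losses from the local model and its k reliable neighbor models could each be fit with a two-component mixture model, and how the resulting clean-probabilities could be ensembled to pick a training subset. The function names (`per_sample_losses`, `clean_probabilities`, `select_clean_indices`) and the 0.5 threshold are hypothetical placeholders, not part of the paper.

```python
# Sketch: ensembled mixture-model clean-sample selection, assuming PyTorch models
# and a scikit-learn GaussianMixture over per-example cross-entropy losses.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def per_sample_losses(model, loader, device="cpu"):
    """Per-example cross-entropy losses of one model over a client's local data."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses).numpy()


def clean_probabilities(losses):
    """Fit a 2-component GMM on normalized losses and return P(clean) per example,
    taking the component with the smaller mean loss as the 'clean' one."""
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4).fit(losses.reshape(-1, 1))
    clean_component = int(np.argmin(gmm.means_.ravel()))
    return gmm.predict_proba(losses.reshape(-1, 1))[:, clean_component]


def select_clean_indices(local_model, neighbor_models, loader, threshold=0.5):
    """Ensemble clean-probabilities from the local model and its k reliable
    neighbors, then keep the examples whose averaged probability exceeds the threshold."""
    models = [local_model] + list(neighbor_models)
    probs = np.mean(
        [clean_probabilities(per_sample_losses(m, loader)) for m in models], axis=0
    )
    return np.where(probs > threshold)[0]
```

The ensembling step reflects the abstract's claim that reliable neighbors with high data expertise or similarity help low-performance clients: a single noisy client's loss statistics may separate clean and corrupted labels poorly, whereas averaging over neighbor models makes the selection more stable.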
