Paper Title

TorchFL: A Performant Library for Bootstrapping Federated Learning Experiments

Paper Authors

Vivek Khimani, Shahin Jabbari

Paper Abstract


With the increased legislation around data privacy, federated learning (FL) has emerged as a promising technique that allows the clients (end-users) to collaboratively train deep learning (DL) models without transferring and storing the data in a centralized, third-party server. We introduce TorchFL, a performant library for (i) bootstrapping the FL experiments, (ii) executing them using various hardware accelerators, (iii) profiling the performance, and (iv) logging the overall and agent-specific results on the go. Being built on a bottom-up design using PyTorch and Lightning, TorchFL provides ready-to-use abstractions for models, datasets, and FL algorithms, while allowing the developers to customize them as and when required. This paper aims to dig deeper into the architecture and design of TorchFL, elaborate on how it allows researchers to bootstrap the federated learning experience, and provide experiments and code snippets for the same. With the ready-to-use implementation of state-of-the-art DL models, datasets, and federated learning support, TorchFL aims to allow researchers with little to no engineering background to set up FL experiments with minimal coding and infrastructure overhead.
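The core aggregation step that FL libraries like TorchFL build on can be sketched independently of TorchFL's actual API. The snippet below is a minimal, hypothetical illustration of federated averaging (FedAvg): each client trains locally on its own data, and only model parameters (never raw data) are sent to the server, which computes a sample-size-weighted average. Plain Python dicts stand in for real model state; this is not TorchFL's interface.

```python
def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: weighted average of client parameter dicts.

    client_weights: list of {param_name: list of floats} from each client.
    client_sizes:   number of local training samples per client, used as
                    the averaging weight (clients with more data count more).
    """
    total = sum(client_sizes)
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = [0.0] * len(client_weights[0][name])
        for weights, n_samples in zip(client_weights, client_sizes):
            for i, value in enumerate(weights[name]):
                aggregated[name][i] += value * (n_samples / total)
    return aggregated

# Example: two clients with equal amounts of local data.
clients = [{"w": [1.0, 3.0]}, {"w": [3.0, 5.0]}]
print(fedavg(clients, [1, 1]))  # -> {'w': [2.0, 4.0]}
```

In a real deployment the server repeats this round many times, broadcasting the aggregated parameters back to the clients for the next round of local training; libraries such as TorchFL wrap this loop, the hardware acceleration, and the logging so researchers do not have to reimplement it.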
