Paper Title

Efficient Fully Distributed Federated Learning with Adaptive Local Links

Paper Authors

Evangelos Georgatos, Christos Mavrokefalidis, Kostas Berberidis

Paper Abstract

Nowadays, data-driven machine and deep learning approaches have provided unprecedented performance in various complex tasks, including image classification and object detection, and in a variety of application areas, like autonomous vehicles, medical imaging and wireless communications. Traditionally, such approaches have been deployed, along with the involved datasets, on standalone devices. Recently, a shift has been observed towards the so-called Edge Machine Learning, in which centralized architectures are adopted that allow multiple devices with local computational and storage resources to collaborate with the assistance of a centralized server. The well-known federated learning approach is able to utilize such architectures by exchanging only model parameters with the server, while keeping the datasets private to each contributing device. In this work, we propose a fully distributed, diffusion-based learning algorithm that does not require a central server, along with an adaptive combination rule for the cooperation of the devices. Using a classification task on the MNIST dataset, the efficacy of the proposed algorithm over corresponding counterparts is demonstrated via the reduction in the number of collaboration rounds required to achieve an acceptable accuracy level in non-IID dataset scenarios.
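Since only the abstract is reproduced here, the following is a minimal sketch of what one collaboration round of such a server-free, diffusion-based scheme could look like, in the common adapt-then-combine (ATC) style. All function names are illustrative, and the adaptive combination rule shown (weighting neighbors by the inverse of their local loss) is an assumption made for illustration, not the rule proposed by the authors.

```python
# Sketch of one ATC diffusion round: every node adapts on its private data,
# then combines parameters over its neighborhood with adaptive weights.
# The inverse-loss weighting below is an illustrative assumption, not the
# paper's actual adaptive combination rule.
import numpy as np

def local_sgd_step(w, grad_fn, lr=0.01):
    """Adapt step: one gradient update on the node's own (private) data."""
    return w - lr * grad_fn(w)

def adaptive_weights(losses):
    """Illustrative adaptive rule: trust lower-loss neighbors more."""
    inv = 1.0 / (np.asarray(losses) + 1e-8)
    return inv / inv.sum()

def diffusion_round(weights, neighbors, grad_fns, loss_fns):
    """One fully distributed round over a network graph (no central server).

    weights   : list of per-node parameter vectors
    neighbors : adjacency list; neighbors[k] includes k itself
    grad_fns  : per-node gradient oracles on private data
    loss_fns  : per-node loss oracles used to set combination weights
    """
    # Adapt: each node takes a local step on its own dataset.
    adapted = [local_sgd_step(w, g) for w, g in zip(weights, grad_fns)]
    # Combine: each node averages its neighborhood with adaptive weights.
    combined = []
    for k, nbrs in enumerate(neighbors):
        losses = [loss_fns[j](adapted[j]) for j in nbrs]
        a = adaptive_weights(losses)
        combined.append(sum(a_j * adapted[j] for a_j, j in zip(a, nbrs)))
    return combined
```

Because each node only exchanges parameters with its graph neighbors, the combine step replaces the central server of standard federated averaging; making the combination weights adaptive is what lets poorly matched (e.g., non-IID) neighbors be down-weighted.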
