Paper Title

Distributed Learning on Heterogeneous Resource-Constrained Devices

Authors

Martin Rapp, Ramin Khalili, Jörg Henkel

Abstract

We consider a distributed system consisting of a heterogeneous set of devices, ranging from low-end to high-end. These devices have different profiles, e.g., different energy budgets or different hardware specifications, which determine their capability to perform certain learning tasks. We propose the first approach that enables distributed learning in such a heterogeneous system. With our approach, each device employs a neural network (NN) with a topology that fits its capabilities; however, parts of these NNs share the same topology, so that their parameters can be jointly learned. This differs from current approaches, such as federated learning, which require all devices to employ the same NN, enforcing a trade-off between achievable accuracy and the computational overhead of training. We evaluate heterogeneous distributed learning for reinforcement learning (RL) and observe that it greatly improves the achievable reward on more powerful devices compared to current approaches, while still maintaining a high reward on weaker devices. We also explore supervised learning, observing similar gains.
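
As a rough illustration of the scheme the abstract describes, the sketch below is not the authors' implementation: the `DeviceNet` class, all layer sizes, and the FedAvg-style averaging step are illustrative assumptions. It shows the core idea that each device runs an NN sized to its capability, while a shared subnetwork keeps an identical topology on every device so that only its parameters are learned jointly.

```python
# Minimal sketch (illustrative, not the paper's code): each device has a
# device-specific NN, but all NNs contain a shared subnetwork with an
# identical topology whose parameters are averaged across devices.
import copy
import torch
import torch.nn as nn

class DeviceNet(nn.Module):
    """NN whose device-specific part grows with the device's capability."""
    def __init__(self, n_extra_layers: int):
        super().__init__()
        # Shared part: same topology on every device, so its parameters
        # can be learned jointly across the heterogeneous system.
        self.shared = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        # Device-specific part: more powerful devices get extra layers.
        layers = []
        for _ in range(n_extra_layers):
            layers += [nn.Linear(64, 64), nn.ReLU()]
        layers.append(nn.Linear(64, 10))
        self.local = nn.Sequential(*layers)

    def forward(self, x):
        return self.local(self.shared(x))

def aggregate_shared(models):
    """Average only the shared subnetwork's parameters across devices."""
    avg = copy.deepcopy(models[0].shared.state_dict())
    for name in avg:
        avg[name] = torch.stack(
            [m.shared.state_dict()[name] for m in models]).mean(dim=0)
    for m in models:
        m.shared.load_state_dict(avg)

# Example: a low-end, a mid-range, and a high-end device.
devices = [DeviceNet(0), DeviceNet(1), DeviceNet(3)]
# ... each device performs local training on its own data, then ...
aggregate_shared(devices)  # joint learning of the shared parameters
```

The contrast with federated learning is visible in `aggregate_shared`: only the shared subnetwork must match across devices, so weak devices are not forced to train the full model that a strong device can afford.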
