Paper Title

Adaptive Differential Filters for Fast and Communication-Efficient Federated Learning

Paper Authors

Daniel Becking, Heiner Kirchhoffer, Gerhard Tech, Paul Haase, Karsten Müller, Heiko Schwarz, Wojciech Samek

Paper Abstract

Federated learning (FL) scenarios inherently generate a large communication overhead by frequently transmitting neural network updates between clients and server. To minimize the communication cost, introducing sparsity in conjunction with differential updates is a commonly used technique. However, sparse model updates can slow down convergence speed or unintentionally skip certain update aspects, e.g., learned features, if error accumulation is not properly addressed. In this work, we propose a new scaling method operating at the granularity of convolutional filters which 1) compensates for highly sparse updates in FL processes, 2) adapts the local models to new data domains by enhancing some features in the filter space while diminishing others and 3) motivates extra sparsity in updates and thus achieves higher compression ratios, i.e., savings in the overall data transfer. Compared to unscaled updates and previous work, experimental results on different computer vision tasks (Pascal VOC, CIFAR10, Chest X-Ray) and neural networks (ResNets, MobileNets, VGGs) in uni-, bidirectional and partial update FL settings show that the proposed method improves the performance of the central server model while converging faster and reducing the total amount of transmitted data by up to 377 times.
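
The abstract only outlines the approach, so the following is a minimal NumPy sketch of the general mechanism it describes: a client forms a differential update for a convolutional layer, sparsifies it, and applies a scaling factor at the granularity of output filters before transmission. The top-k sparsification rule, the norm-ratio scaling heuristic, and all names (`topk_sparsify`, `filter_scaled_update`) are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch (NumPy) of sparse differential updates with per-filter scaling.
# The sparsification rule and the choice of per-filter scales below are
# assumptions for illustration, not the paper's exact procedure.
import numpy as np

def topk_sparsify(update, keep_ratio=0.01):
    """Keep only the largest-magnitude entries of a differential update."""
    flat = np.abs(update).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return update * (np.abs(update) >= threshold)

def filter_scaled_update(w_new, w_old, keep_ratio=0.01):
    """Differential update for a conv weight tensor (out_ch, in_ch, kH, kW),
    sparsified and scaled per output filter."""
    delta = w_new - w_old                      # differential update
    sparse = topk_sparsify(delta, keep_ratio)  # introduce sparsity
    # Illustrative per-filter scale: restore the update energy removed by
    # sparsification for each output filter (assumption, not the paper's rule).
    full = np.linalg.norm(delta.reshape(delta.shape[0], -1), axis=1)
    kept = np.linalg.norm(sparse.reshape(sparse.shape[0], -1), axis=1)
    scales = np.where(kept > 0, full / np.maximum(kept, 1e-12), 1.0)
    return sparse * scales[:, None, None, None]
```

In such a scheme the server would reconstruct its model as `w_old + filter_scaled_update(w_new, w_old)`, so only the sparse, per-filter-scaled difference needs to be transmitted.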
