Paper Title

Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring

Paper Authors

Zhengquan Luo, Yunlong Wang, Zilei Wang, Zhenan Sun, Tieniu Tan

Paper Abstract

Attributes skew prevents current federated learning (FL) frameworks from maintaining consistent optimization directions among the clients, which inevitably leads to performance degradation and unstable convergence. The core problems are: 1) Domain-specific attributes, which are non-causal and only locally valid, are inadvertently mixed into the global aggregation. 2) A one-stage optimization of entangled attributes cannot simultaneously satisfy two conflicting objectives, i.e., generalization and personalization. To cope with these problems, we propose Disentangled Federated Learning (DFL), which disentangles the domain-specific and cross-invariant attributes into two complementary branches trained independently by the proposed alternating local-global optimization. Importantly, a convergence analysis proves that the FL system converges stably even if incomplete client models participate in the global aggregation, which greatly expands the application scope of FL. Extensive experiments verify that DFL achieves higher performance, better interpretability, and a faster convergence rate than SOTA FL methods on both manually synthesized and realistic attributes-skew datasets.
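
The abstract sketches an architecture and a training loop: split each client model into a cross-invariant branch and a domain-specific branch, optimize them alternately on the client, and aggregate only the invariant branch at the server. As a rough illustration only, here is a minimal PyTorch sketch of that structure; the module names (`invariant`, `specific`, `head`), network shapes, freeze/unfreeze schedule, and plain FedAvg-style averaging are all assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code) of the two ideas the
# abstract names: a two-branch disentangled client model, and a global
# aggregation that touches only the invariant branch.
import torch
import torch.nn as nn


class DisentangledClientModel(nn.Module):
    def __init__(self, in_dim=32, feat_dim=16, num_classes=10):
        super().__init__()
        # Shared branch: intended to capture cross-client invariant (causal)
        # attributes; only this branch is aggregated globally.
        self.invariant = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Local branch: captures domain-specific attributes that are only
        # locally valid and therefore never leave the client.
        self.specific = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Classifier head reads both complementary branches.
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x):
        z = torch.cat([self.invariant(x), self.specific(x)], dim=1)
        return self.head(z)


def local_alternating_step(model, x, y):
    """One alternating local update: optimize each branch in turn while the
    other is frozen, standing in for the two conflicting objectives
    (generalization vs. personalization) that a single one-stage
    optimization cannot satisfy simultaneously."""
    loss_fn = nn.CrossEntropyLoss()
    for frozen in (model.specific, model.invariant):
        for p in frozen.parameters():
            p.requires_grad_(False)
        opt = torch.optim.SGD(
            [p for p in model.parameters() if p.requires_grad], lr=0.01)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        for p in frozen.parameters():
            p.requires_grad_(True)


def aggregate_invariant(global_model, client_models):
    """Invariant aggregation: average only the invariant branch across
    clients, so each client contributes an 'incomplete' model."""
    with torch.no_grad():
        for name, p in global_model.invariant.named_parameters():
            client_params = [dict(m.invariant.named_parameters())[name]
                             for m in client_models]
            p.copy_(torch.stack(client_params).mean(dim=0))
```

Keeping `specific` out of `aggregate_invariant` is where the abstract's convergence claim comes in: the server only ever averages a consistent subset of parameters, so clients effectively submit incomplete models yet the global update can still converge stably.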
