Paper Title
SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization
Paper Authors
Paper Abstract
Machine Unlearning (MU) is an increasingly important topic in machine learning safety, aiming to remove the contribution of a given data point from a training procedure. Federated Unlearning (FU) extends MU to unlearn a given client's contribution from a federated training routine. While several FU methods have been proposed, we currently lack a general approach providing formal unlearning guarantees for the FedAvg routine, while ensuring scalability and generalization beyond the convexity assumption on the clients' loss functions. We aim to fill this gap by proposing SIFU (Sequential Informed Federated Unlearning), a new FU method applicable to both convex and non-convex optimization regimes. SIFU naturally applies to FedAvg without additional computational cost for the clients and provides formal guarantees on the quality of the unlearning task. We provide a theoretical analysis of the unlearning properties of SIFU, and empirically demonstrate its effectiveness compared to a panel of state-of-the-art unlearning methods.
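To make the setting concrete, below is a minimal, self-contained sketch of FedAvg training followed by a rollback-and-fine-tune unlearning step, which is the general flavor of checkpoint-based FU methods. This is not the authors' SIFU algorithm: the function names, the choice of rollback round, and the noise scale `sigma` are illustrative placeholders, standing in for the quantities a formal analysis such as the paper's would prescribe.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    least-squares loss (a stand-in for an arbitrary local objective)."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One FedAvg round: every client trains locally, then the server
    averages the resulting models (uniform weights for simplicity)."""
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    return np.mean(local_models, axis=0)

# Synthetic federated dataset: 4 clients holding linear-regression tasks.
d = 5
w_true = rng.normal(size=d)
clients = []
for _ in range(4):
    X = rng.normal(size=(50, d))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

# Federated training, keeping per-round checkpoints. Storing the model
# history is what makes "informed" rollback-style unlearning possible.
w = np.zeros(d)
checkpoints = [w.copy()]
for _ in range(20):
    w = fedavg_round(w, clients)
    checkpoints.append(w.copy())

def unlearn_client(checkpoints, clients, forget_idx, rollback_round, sigma=0.01):
    """Hypothetical unlearning step: rewind to a stored checkpoint,
    perturb it with Gaussian noise, and fine-tune on the remaining
    clients only. rollback_round and sigma are placeholders for the
    quantities a formal unlearning guarantee would dictate."""
    kept = [c for i, c in enumerate(clients) if i != forget_idx]
    w = checkpoints[rollback_round] + sigma * rng.normal(size=d)
    for _ in range(10):
        w = fedavg_round(w, kept)
    return w

w_unlearned = unlearn_client(checkpoints, clients, forget_idx=0, rollback_round=10)
print("model after unlearning client 0:", w_unlearned[:3])
```

Note the design trade-off this sketch exposes: rewinding further back removes more of the forgotten client's influence but discards more training progress, which is why choosing the rollback point in an informed way is central to this family of methods.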