Paper Title


Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning

Authors

Kerem Ozfatura, Emre Ozfatura, Alptekin Kupcu, Deniz Gunduz

Abstract

The increasing popularity of the federated learning (FL) framework due to its success in a wide range of collaborative learning tasks also induces certain security concerns. Among many vulnerabilities, the risk of Byzantine attacks is of particular concern, which refers to the possibility of malicious clients participating in the learning process. Hence, a crucial objective in FL is to neutralize the potential impact of Byzantine attacks and to ensure that the final model is trustworthy. It has been observed that the higher the variance among the clients' models/updates, the more space there is for Byzantine attacks to be hidden. As a consequence, by utilizing momentum, and thus reducing the variance, it is possible to weaken the strength of known Byzantine attacks. The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can be used as a reference point to better neutralize Byzantine attacks. In this work, we first expose vulnerabilities of the CC framework and introduce a novel attack strategy that can circumvent the defences of CC and other robust aggregators, reducing their test accuracy by up to 33% in best-case scenarios on image classification tasks. Then, we propose a new robust and fast defence mechanism that is effective against the proposed and other existing Byzantine attacks.
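The centered clipping rule referenced in the abstract re-centers each client update around the previous round's momentum term and clips the norm of the difference before averaging, so that outlier (potentially Byzantine) updates have bounded influence. A minimal NumPy sketch of this aggregation step, where the clipping radius `tau` and the number of inner iterations are illustrative parameters, not values from the paper:

```python
import numpy as np

def centered_clip(updates, v_prev, tau=10.0, iters=1):
    """Sketch of centered-clipping aggregation.

    updates: list of client update vectors (np.ndarray)
    v_prev:  previous-round reference vector (e.g. server momentum)
    tau:     clipping radius (assumed value, tunable)
    iters:   number of re-centering iterations (assumed)
    """
    v = np.asarray(v_prev, dtype=float).copy()
    for _ in range(iters):
        clipped = []
        for u in updates:
            d = np.asarray(u, dtype=float) - v  # center on reference point
            norm = np.linalg.norm(d)
            # Scale down any deviation whose norm exceeds tau
            scale = min(1.0, tau / norm) if norm > 0 else 1.0
            clipped.append(d * scale)
        v = v + np.mean(clipped, axis=0)  # move reference toward clipped mean
    return v
```

With honest, low-variance updates the clipping is inactive and the rule reduces to plain averaging; a single large malicious update is shrunk to norm at most `tau` around the reference, which is the variance-bounding behavior the abstract describes (and which the paper's proposed attack is designed to circumvent).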
