Paper Title


OLIVE: Oblivious Federated Learning on Trusted Execution Environment against the Risk of Sparsification

Authors

Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa

Abstract


Combining Federated Learning (FL) with a Trusted Execution Environment (TEE) is a promising approach for realizing privacy-preserving FL, which has garnered significant academic attention in recent years. Implementing the TEE on the server side enables each round of FL to proceed without exposing the client's gradient information to untrusted servers. This addresses usability gaps in existing secure aggregation schemes as well as utility gaps in differentially private FL. However, to address the issue using a TEE, the vulnerabilities of server-side TEEs need to be considered -- this has not been sufficiently investigated in the context of FL. The main technical contribution of this study is the analysis of the vulnerabilities of TEE in FL and the defense. First, we theoretically analyze the leakage of memory access patterns, revealing the risk of sparsified gradients, which are commonly used in FL to enhance communication efficiency and model accuracy. Second, we devise an inference attack to link memory access patterns to sensitive information in the training dataset. Finally, we propose an oblivious yet efficient aggregation algorithm to prevent memory access pattern leakage. Our experiments on real-world data demonstrate that the proposed method functions efficiently in practical scales.
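The leakage described in the abstract can be illustrated with a minimal sketch (not the paper's actual algorithm, which the abstract does not specify): when a server aggregates a client's sparsified gradient inside a TEE, a naive loop writes only to the non-zero coordinates, so an adversary observing memory access patterns learns which parameters the client updated. An oblivious variant scans every parameter slot per update with a data-independent access pattern. All function and variable names below are hypothetical.

```python
import numpy as np

def naive_sparse_aggregate(params, updates):
    # Leaky: writes touch only the client's non-zero indices, so an
    # observer of memory access patterns learns which coordinates
    # (and hence potentially sensitive training-data features) were updated.
    out = params.copy()
    for idx, val in updates:
        out[idx] += val
    return out

def oblivious_sparse_aggregate(params, updates):
    # Oblivious linear-scan variant: every update touches every slot,
    # so the access pattern is independent of the sparse indices.
    # In a real enclave the comparison and update below would be done
    # with constant-time (branchless) primitives.
    out = params.copy()
    for idx, val in updates:
        for j in range(len(out)):
            match = (j == idx)      # data-independent scan over all slots
            out[j] += val * match   # adds val only where j == idx
    return out
```

The oblivious scan costs O(d) per sparse update instead of O(1), which is why an efficient oblivious aggregation algorithm, as the abstract claims to contribute, is non-trivial at practical model sizes.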
