Paper Title

Privacy-Preserving XGBoost Inference

Authors

Xianrui Meng, Joan Feigenbaum

Abstract

Although machine learning (ML) is widely used for predictive tasks, there are important scenarios in which ML cannot be used or at least cannot achieve its full potential. A major barrier to adoption is the sensitive nature of predictive queries. Individual users may lack sufficiently rich datasets to train accurate models locally but also be unwilling to send sensitive queries to commercial services that vend such models. One central goal of privacy-preserving machine learning (PPML) is to enable users to submit encrypted queries to a remote ML service, receive encrypted results, and decrypt them locally. We aim to develop practical solutions for real-world privacy-preserving ML inference problems. In this paper, we propose a privacy-preserving XGBoost prediction algorithm, which we have implemented and evaluated empirically on AWS SageMaker. Experimental results indicate that our algorithm is efficient enough to be used in real ML production environments.
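The abstract does not detail the construction, but one standard ingredient in private decision-tree evaluation is order-preserving encryption (OPE): because ciphertexts compare in the same order as plaintexts, a server can route an encrypted query through a tree whose thresholds are also encrypted, without learning either. The sketch below illustrates only that pattern; `ToyOPE`, the tree layout, and the leaf tokens are hypothetical stand-ins, not the paper's actual scheme (a toy OPE like this offers no real security).

```python
import random

class ToyOPE:
    """Toy 'order-preserving encryption': a secret, strictly increasing
    random mapping from small plaintext integers to larger ciphertext
    integers. Order is preserved, so encrypted values remain comparable."""

    def __init__(self, domain=256, seed=0):
        rng = random.Random(seed)
        # Cumulative sums of positive random gaps give a strictly
        # increasing (hence order-preserving) lookup table.
        table, total = [], 0
        for _ in range(domain):
            total += rng.randint(1, 1000)
            table.append(total)
        self._table = table

    def encrypt(self, x):
        return self._table[x]

def server_traverse(node, enc_features):
    """Server-side tree evaluation: compares encrypted feature values
    against encrypted thresholds and returns an opaque leaf token that
    only the client can decrypt."""
    while "leaf" not in node:
        go_left = enc_features[node["feature"]] < node["enc_threshold"]
        node = node["left"] if go_left else node["right"]
    return node["leaf"]

ope = ToyOPE()
# Client encrypts the model's thresholds once, and each query's features.
tree = {
    "feature": 0,
    "enc_threshold": ope.encrypt(10),
    "left": {"leaf": "ct_leaf_A"},   # placeholder ciphertext tokens;
    "right": {"leaf": "ct_leaf_B"},  # client decrypts the result locally
}
print(server_traverse(tree, {0: ope.encrypt(7)}))   # routes left
print(server_traverse(tree, {0: ope.encrypt(12)}))  # routes right
```

In a full XGBoost ensemble this traversal would run over every tree, and the client would decrypt and sum the returned leaf values locally to obtain the prediction.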
