Paper Title
sqSGD: Locally Private and Communication Efficient Federated Learning
Paper Authors
Paper Abstract
Federated learning (FL) is a technique that trains machine learning models from decentralized data sources. We study FL under local notions of privacy constraints, which provide strong protection against sensitive data disclosure by obfuscating the data before it leaves the client. We identify two major concerns in designing practical privacy-preserving FL algorithms: communication efficiency and high-dimensional compatibility. We then develop a gradient-based learning algorithm called \emph{sqSGD} (selective quantized stochastic gradient descent) that addresses both concerns. The proposed algorithm is based on a novel privacy-preserving quantization scheme that uses a constant number of bits per dimension per client. We then improve the base algorithm in three ways: first, we apply a gradient subsampling strategy that simultaneously offers better training performance and smaller communication costs under a fixed privacy budget; second, we utilize randomized rotation as a preprocessing step to reduce quantization error; third, we adopt an adaptive shrinkage strategy for the gradient norm upper bound to improve accuracy and stabilize training. Finally, the practicality of the proposed framework is demonstrated on benchmark datasets. Experimental results show that sqSGD successfully learns large models such as LeNet and ResNet under local privacy constraints. In addition, at fixed privacy and communication levels, sqSGD significantly outperforms various baseline algorithms.
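To make the client-side pipeline described in the abstract concrete, the following is a minimal sketch of a single client update that combines the four named ingredients: quantization with a constant number of bits per dimension, gradient subsampling, randomized rotation, and a gradient-norm upper bound. It is an illustrative assumption, not the authors' actual algorithm; the function `client_update` and its parameters (`eps`, `subsample_frac`, `norm_bound`) are hypothetical, and the sign-flip/permutation rotation and randomized-response privatization are stand-ins for whatever mechanisms the paper actually specifies.

```python
import numpy as np

def client_update(grad, eps, subsample_frac=0.25, norm_bound=1.0, seed=0):
    """Illustrative sketch of a client-side step in the spirit of sqSGD.

    NOTE: this is NOT the paper's exact mechanism -- it is a hypothetical
    combination of the four ingredients the abstract names: randomized
    rotation, gradient subsampling, a gradient-norm upper bound, and
    1-bit-per-dimension stochastic quantization made locally private
    via randomized response.
    """
    rng = np.random.default_rng(seed)
    d = grad.size

    # (1) Rotation surrogate: random sign flips plus a permutation, used here
    #     as a cheap stand-in for a structured random rotation that spreads
    #     the gradient's energy across coordinates and reduces quantization error.
    signs = rng.choice([-1.0, 1.0], size=d)
    perm = rng.permutation(d)
    g = (grad * signs)[perm]

    # (2) Clip to the gradient-norm upper bound (the paper shrinks this bound
    #     adaptively during training).
    g = g * min(1.0, norm_bound / (np.linalg.norm(g) + 1e-12))

    # (3) Subsample a fraction of coordinates; only these are quantized and
    #     reported, saving both privacy budget and communication.
    k = max(1, int(subsample_frac * d))
    idx = rng.choice(d, size=k, replace=False)

    # (4) Unbiased stochastic rounding of each selected coordinate to
    #     {-norm_bound, +norm_bound} (one bit per dimension), followed by a
    #     randomized-response flip so each report satisfies eps-local privacy.
    p_up = np.clip((g[idx] + norm_bound) / (2.0 * norm_bound), 0.0, 1.0)
    levels = np.where(rng.random(k) < p_up, norm_bound, -norm_bound)
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    flip = rng.random(k) >= p_keep
    levels[flip] *= -1.0

    # The server would debias the randomized response, average reports across
    # clients, and invert the permutation / sign flips before applying SGD.
    return idx, levels
```

In this sketch, each client sends only the indices and one bit per selected coordinate, which is how the constant per-dimension communication cost and the privacy-budget savings from subsampling would show up in practice.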