Paper Title

Private and Communication-Efficient Edge Learning: A Sparse Differential Gaussian-Masking Distributed SGD Approach

Paper Authors

Xin Zhang, Minghong Fang, Jia Liu, Zhengyuan Zhu

Paper Abstract

With the rise of machine learning (ML) and the proliferation of smart mobile devices, recent years have witnessed a surge of interest in performing ML in wireless edge networks. In this paper, we consider the problem of jointly improving data privacy and communication efficiency of distributed edge learning, both of which are critical performance metrics in wireless edge network computing. Toward this end, we propose a new decentralized stochastic gradient method with sparse differential Gaussian-masked stochastic gradients (SDM-DSGD) for non-convex distributed edge learning. Our main contributions are three-fold: i) we theoretically establish the privacy and communication-efficiency performance guarantees of our SDM-DSGD method, which outperforms all existing works; ii) we show that SDM-DSGD improves the fundamental training-privacy trade-off by two orders of magnitude compared with the state-of-the-art; and iii) we reveal theoretical insights and offer practical design guidelines for the interactions between privacy preservation and communication efficiency, two conflicting performance goals. We conduct extensive experiments with a variety of learning models on the MNIST and CIFAR-10 datasets to verify our theoretical findings. Collectively, our results contribute to the theory and algorithm design for distributed edge learning.
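
To make the core idea concrete, below is a minimal Python sketch of one sparse differential Gaussian-masking step as the abstract describes it: each node sparsifies the difference between its current stochastic gradient and its last transmitted state, then adds Gaussian noise for privacy before communicating. The function name sdm_mask, the top-k sparsifier, and the parameters k and sigma are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def sdm_mask(grad, last_sent, k, sigma, rng):
    """Sketch of one sparse differential Gaussian-masking step.

    grad      : current local stochastic gradient (1-D array)
    last_sent : previously transmitted (reconstructed) gradient state
    k         : number of coordinates kept (sparsification budget, assumed top-k)
    sigma     : std of the Gaussian noise added for differential privacy
    """
    diff = grad - last_sent                      # differential update
    idx = np.argsort(np.abs(diff))[-k:]          # top-k sparsification (one common choice)
    sparse = np.zeros_like(diff)
    # Gaussian mask is applied only to the transmitted coordinates
    sparse[idx] = diff[idx] + rng.normal(0.0, sigma, size=k)
    return sparse                                # sparse, noisy difference sent to neighbors

# toy usage
rng = np.random.default_rng(0)
g = rng.standard_normal(10)
prev = np.zeros(10)
msg = sdm_mask(g, prev, k=3, sigma=0.1, rng=rng)
print(msg)

Transmitting a sparsified difference rather than the full gradient is what reduces communication, while the Gaussian mask on the transmitted coordinates is what provides the privacy guarantee; the paper's analysis concerns how these two mechanisms interact.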
