Paper Title

On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel

Paper Authors

Shubhi Shukla, Manaar Alam, Sarani Bhattacharya, Debdeep Mukhopadhyay, Pabitra Mitra

Paper Abstract

Recent Deep Learning (DL) advancements in solving complex real-world tasks have led to its widespread adoption in practical applications. However, this opportunity comes with significant underlying risks, as many of these models rely on privacy-sensitive data for training across a variety of applications, making them an over-exposed threat surface for privacy violations. Furthermore, the widespread use of cloud-based Machine-Learning-as-a-Service (MLaaS) for its robust infrastructure support has broadened the threat surface to include a variety of remote side-channel attacks. In this paper, we first identify and report a novel data-dependent timing side-channel leakage (termed Class Leakage) in DL implementations, originating from a non-constant-time branching operation in PyTorch, a widely used DL framework. We further demonstrate a practical inference-time attack in which an adversary with user privilege and hard-label black-box access to an MLaaS can exploit Class Leakage to compromise the privacy of MLaaS users. DL models are vulnerable to Membership Inference Attacks (MIA), in which an adversary's objective is to deduce whether a particular data point was used to train the model. In this paper, as a separate case study, we demonstrate that a DL model secured with differential privacy (a popular countermeasure against MIA) is still vulnerable to an adversary exploiting Class Leakage to mount MIA. We develop an easy-to-implement countermeasure that makes the branching operation constant-time, alleviating the Class Leakage and also aiding in mitigating MIA. To validate our approach, we train five state-of-the-art pre-trained DL models on two standard benchmark image classification datasets, CIFAR-10 and CIFAR-100, in two different computing environments with Intel Xeon and Intel i7 processors.
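To make the idea of Class Leakage concrete, below is a minimal, self-contained Python sketch of how a class-dependent branch becomes measurable to a hard-label black-box adversary. This is not the paper's attack code: the function leaky_postprocess and its synthetic workload are illustrative assumptions standing in for the non-constant-time branching the paper reports inside PyTorch.

import time
import statistics
import torch

def leaky_postprocess(logits: torch.Tensor, sensitive_class: int = 0) -> int:
    # Hypothetical stand-in for a non-constant-time branch: one predicted
    # class triggers extra work that the other classes do not.
    pred = int(torch.argmax(logits))
    if pred == sensitive_class:
        sum(range(50_000))  # data-dependent work on only one path
    return pred

def time_query(logits: torch.Tensor) -> float:
    t0 = time.perf_counter()
    leaky_postprocess(logits)
    return time.perf_counter() - t0

# The adversary observes only hard labels and wall-clock time, yet the two
# timing distributions separate cleanly by predicted class.
hit = torch.zeros(10); hit[0] = 1.0    # argmax lands on the sensitive class
miss = torch.zeros(10); miss[1] = 1.0  # argmax lands elsewhere
t_hit = [time_query(hit) for _ in range(200)]
t_miss = [time_query(miss) for _ in range(200)]
print(f"mean hit:  {statistics.mean(t_hit):.6f}s")
print(f"mean miss: {statistics.mean(t_miss):.6f}s")

In the paper's setting the timing difference comes from PyTorch's own internal branching rather than an explicit workload like this one, but the measurement principle, repeated timed queries followed by statistical separation, is the same.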

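In the same illustrative spirit, the sketch below shows the direction the constant-time countermeasure takes: equalize the work across branches so execution time no longer depends on the predicted class. Again, this is an assumption-laden toy, not the paper's actual PyTorch patch.

import torch

def constant_time_postprocess(logits: torch.Tensor, sensitive_class: int = 0) -> int:
    # Branchless variant of the hypothetical leaky_postprocess above: the
    # class-dependent work runs on every query, and its result is selected
    # arithmetically rather than via control flow.
    pred = int(torch.argmax(logits))
    extra = sum(range(50_000))                   # same work on every path
    is_sensitive = int(pred == sensitive_class)  # 0 or 1, no timing-relevant branch
    _ = is_sensitive * extra                     # arithmetic select, result discarded here
    return pred

The design choice mirrors standard constant-time programming: rather than trying to hide which branch was taken, remove the data-dependent branch entirely so every input follows the same execution path.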