Paper Title
RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for Low-Latency IoT Systems
Paper Authors
Paper Abstract
Although Deep Neural Networks (DNNs) have become the backbone technology of several ubiquitous applications, their deployment on resource-constrained machines, e.g., Internet of Things (IoT) devices, remains challenging. To satisfy the resource requirements of such a paradigm, collaborative deep inference leveraging IoT synergy was introduced. However, distributing a DNN across devices suffers from severe data leakage: various threats have been demonstrated, including black-box attacks in which malicious participants can recover arbitrary inputs fed into their devices. Although many countermeasures have been designed to achieve privacy-preserving DNNs, most of them incur additional computation and lower accuracy. In this paper, we present an approach that targets the security of collaborative deep inference by rethinking the distribution strategy, without sacrificing model performance. Specifically, we examine the DNN partitions that make the model susceptible to black-box threats, and we derive the amount of data that should be allocated to each device to hide properties of the original input. We formulate this methodology as an optimization problem in which we establish a trade-off between the latency of co-inference and the privacy level of the data. Next, to relax the computation of the optimal solution, we recast our approach as a Reinforcement Learning (RL) design that supports heterogeneous devices as well as multiple DNNs and datasets.
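To make the latency/privacy trade-off concrete, the toy Python sketch below enumerates candidate input-data allocations across three heterogeneous devices and selects the one minimizing a weighted objective. This is a minimal illustration, not the paper's actual formulation: the device speeds, the privacy heuristic (less data per device means harder black-box reconstruction), and the weight lam are all illustrative assumptions.

from itertools import product

# Assumed per-device compute speed (data units per ms) for a toy
# heterogeneous setup of three IoT devices.
DEVICE_SPEED = [1.0, 0.5, 2.0]

def latency(shares, total=100.0):
    # Parallel co-inference: overall latency is set by the slowest device,
    # i.e., the largest (allocated data / device speed) ratio.
    return max(s * total / v for s, v in zip(shares, DEVICE_SPEED))

def privacy(shares):
    # Heuristic privacy level: the less input data any single (possibly
    # malicious) device receives, the harder black-box input recovery becomes.
    return 1.0 - max(shares)

def objective(shares, lam=50.0):
    # Weighted trade-off: minimize latency while rewarding higher privacy.
    return latency(shares) - lam * privacy(shares)

# Enumerate allocations on a coarse grid whose shares sum to 1.
grid = [i / 10 for i in range(11)]
candidates = [s for s in product(grid, repeat=3) if abs(sum(s) - 1.0) < 1e-9]

best = min(candidates, key=objective)
print("allocation:", best,
      "latency:", latency(best),
      "privacy:", privacy(best))

In the paper's setting this exhaustive search is replaced by the RL agent, which learns allocation policies that generalize across heterogeneous devices and multiple DNNs and datasets instead of re-solving the optimization for each configuration.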