Paper Title
Block Switching: A Stochastic Approach for Deep Learning Security
Paper Authors
Paper Abstract
Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models: subtly crafted perturbations of the input can make a well-trained, high-accuracy network produce arbitrary incorrect predictions while remaining imperceptible to the human visual system. In this paper, we introduce Block Switching (BS), a stochasticity-based defense strategy against adversarial attacks. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, hence unpredictable to the adversary. We show empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP). Compared to other defenses, BS is also characterized by the following features: (i) BS causes a smaller drop in test accuracy; (ii) BS is attack-independent; and (iii) BS is compatible with other defenses and can be used jointly with them.
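The core mechanism described above can be illustrated with a minimal sketch. The `BlockSwitch` class and its toy channels below are hypothetical, not the authors' code: they only show the idea of replacing one block with several parallel channels, exactly one of which is randomly activated on each forward pass.

```python
import random

class BlockSwitch:
    """Minimal sketch of Block Switching (hypothetical API, not the paper's code).

    A block of layers is replaced by several parallel channels trained to
    similar accuracy; on each forward pass one channel is activated
    uniformly at random, so the mapping seen by an adversary is stochastic.
    """
    def __init__(self, channels, rng=None):
        self.channels = channels          # list of callables (sub-networks)
        self.rng = rng or random.Random()

    def forward(self, x):
        # Randomly assign the active channel at run time.
        active = self.rng.choice(self.channels)
        return active(x)

# Toy usage: two interchangeable "channels" implemented as scalar maps.
bs = BlockSwitch([lambda x: 2.0 * x, lambda x: 1.9 * x])
y = bs.forward(1.0)   # either 2.0 or 1.9, chosen at random
```

Because the active channel changes between forward passes, gradients estimated by an attacker on one pass need not match the channel used on the next, which is the intuition behind the dispersed input gradient distribution reported in the abstract.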