Paper Title
Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks
Paper Authors
Paper Abstract
Machine-learning architectures, such as Convolutional Neural Networks (CNNs), are vulnerable to adversarial attacks: inputs carefully crafted to force the system to output a wrong label. Since machine learning is being deployed in safety-critical and security-sensitive domains, such attacks may have catastrophic security and safety consequences. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine-learning classifiers. We show that successful adversarial attacks against the exact classifier transfer poorly to the approximate implementation. Surprisingly, the robustness advantages also apply to white-box attacks, where the attacker has unrestricted access to the approximate classifier implementation: in this case, we show that substantially higher levels of adversarial noise are needed to produce adversarial examples. Furthermore, our approximate computing model maintains the same level of classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments with a set of strong adversarial attacks and empirically show that the proposed implementation considerably increases the robustness of LeNet-5, AlexNet, and VGG-11 CNNs, with up to 50% by-product savings in energy consumption due to the simpler nature of the approximate logic.
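To illustrate the core idea, below is a minimal sketch of how hardware-style approximate multiplication can be simulated in software and plugged into a quantized dot product. The truncation scheme, bit widths, and the approx_mul/dot helpers are illustrative assumptions for demonstration only, not the approximate-multiplier design evaluated in the paper.

import numpy as np

def approx_mul(a, b, drop_bits=4):
    # Hypothetical approximation: compute the exact product, then zero the
    # `drop_bits` least-significant bits (a common truncation-style scheme).
    exact = a.astype(np.int32) * b.astype(np.int32)
    mask = ~((1 << drop_bits) - 1)
    return exact & mask

def exact_mul(a, b):
    # Reference exact multiplier for comparison.
    return a.astype(np.int32) * b.astype(np.int32)

def dot(x, w, mul):
    # Dot product (the core of a convolution/fully-connected layer)
    # built from a supplied exact or approximate multiplier.
    return int(mul(x, w).sum())

rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=64, dtype=np.int8)  # quantized activations
w = rng.integers(-128, 128, size=64, dtype=np.int8)  # quantized weights

print("exact  accumulation:", dot(x, w, exact_mul))
print("approx accumulation:", dot(x, w, approx_mul))

The intuition this toy example conveys: a small perturbation tuned to shift the exact accumulation across a decision boundary may produce a different, smaller, shift under approximate arithmetic, which is consistent with the reduced transferability of adversarial examples reported in the abstract.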