Paper Title

Adversarial Defense via Neural Oscillation inspired Gradient Masking

Paper Authors

Chunming Jiang, Yilei Zhang

Paper Abstract

Spiking neural networks (SNNs) attract great attention due to their low power consumption, low latency, and biological plausibility. As they are widely deployed in neuromorphic devices for low-power brain-inspired computing, security issues become increasingly important. However, compared to deep neural networks (DNNs), SNNs currently lack specifically designed defense methods against adversarial attacks. Inspired by neural membrane potential oscillation, we propose a novel neural model that incorporates the bio-inspired oscillation mechanism to enhance the security of SNNs. Our experiments show that SNNs with neural oscillation neurons resist adversarial attacks better than ordinary SNNs with LIF neurons across a range of architectures and datasets. Furthermore, we propose a defense method that changes the model's gradients by replacing the form of the oscillation, which hides the original training gradients and misleads the attacker into using the gradients of 'fake' neurons to generate invalid adversarial samples. Our experiments suggest that the proposed defense effectively resists both single-step and iterative attacks, with defense effectiveness comparable to, and computational cost much lower than, adversarial training methods on DNNs. To the best of our knowledge, this is the first work to establish adversarial defense through masking surrogate gradients on SNNs.
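To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch of (a) a LIF neuron with an additive membrane-potential oscillation and (b) a spike function whose surrogate gradient is deliberately mismatched so gradient-based attackers follow a 'fake' gradient. The sinusoidal oscillation, the shifted-sigmoid surrogate, and all names (MaskedSurrogateSpike, oscillatory_lif_step, amp, omega) are illustrative assumptions; the abstract does not give the paper's exact formulation.

```python
import torch


class MaskedSurrogateSpike(torch.autograd.Function):
    """Heaviside spike whose backward pass uses a deliberately mismatched
    ('fake') surrogate gradient, misleading gradient-based attackers.
    Hypothetical illustration; the paper's surrogate form is not
    specified in the abstract."""

    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # 'Fake' surrogate: derivative of a sigmoid centered away from
        # the true threshold, masking the gradient used in training.
        s = torch.sigmoid(4.0 * (v - ctx.threshold - 0.5))
        return grad_output * 4.0 * s * (1.0 - s), None


def oscillatory_lif_step(v, x, t, tau=2.0, threshold=1.0,
                         amp=0.3, omega=0.5):
    """One simulation step of a LIF neuron with an additive membrane
    oscillation (assumed sinusoidal here)."""
    osc = amp * torch.sin(torch.tensor(omega * float(t)))
    v = v + (x - v) / tau + osc          # leaky integration + oscillation
    spike = MaskedSurrogateSpike.apply(v, threshold)
    v = v * (1.0 - spike)                # hard reset on spiking
    return v, spike


# Usage: drive a small population with a constant input current.
if __name__ == "__main__":
    v = torch.zeros(4)
    for t in range(10):
        v, s = oscillatory_lif_step(v, x=torch.full((4,), 0.8), t=t)
        print(t, s.tolist())
```

In this sketch the forward pass (and hence the model's predictions) is unchanged by the masking; only the gradient exposed through autograd differs, which is what would cause gradient-based attacks to produce invalid adversarial samples.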
