Paper Title

Toward Robust Spiking Neural Network Against Adversarial Perturbation

Paper Authors

Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie

Paper Abstract

As spiking neural networks (SNNs) are deployed increasingly in real-world, efficiency-critical applications, the security concerns in SNNs attract more attention. Currently, researchers have already demonstrated that an SNN can be attacked with adversarial examples. How to build a robust SNN becomes an urgent issue. Recently, many studies have applied certified training to artificial neural networks (ANNs), which can provably improve the robustness of an NN model. However, existing certifications cannot transfer to SNNs directly because of the distinct neuron behavior and input formats of SNNs. In this work, we first design S-IBP and S-CROWN, which tackle the non-linear functions in SNNs' neuron modeling. Then, we formalize the boundaries for both digital and spike inputs. Finally, we demonstrate the efficiency of our proposed robust training method on different datasets and model architectures. Based on our experiments, we can achieve a maximum $37.7\%$ attack error reduction with $3.7\%$ original accuracy loss. To the best of our knowledge, this is the first analysis on robust training of SNNs.
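
Since only the abstract is available here, the following is a minimal, hypothetical NumPy sketch of the general idea behind interval bound propagation (IBP) pushed through one time step of a discretized LIF neuron. Every name (`lif_ibp_step`, `decay`, `v_th`) and the specific neuron update are illustrative assumptions, not the paper's actual formulation; the paper's S-IBP/S-CROWN derive tighter, SNN-specific bounds for the fire-and-reset non-linearity.

```python
# Illustrative sketch only (not the authors' code): IBP-style interval
# propagation through one time step of an assumed discretized LIF neuron.
import numpy as np

def linear_ibp(x_l, x_u, W, b):
    """Exact interval bounds of W @ x + b when x lies in [x_l, x_u]."""
    center, radius = (x_l + x_u) / 2.0, (x_u - x_l) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def lif_ibp_step(u_l, u_u, s_l, s_u, x_l, x_u, W, b, decay=0.9, v_th=1.0):
    """Propagate interval bounds through one assumed LIF update:
       u[t] = decay * u[t-1] * (1 - s[t-1]) + W x[t] + b
       s[t] = 1 if u[t] >= v_th else 0
    """
    # Bounds on the synaptic input current through the linear layer.
    cur_l, cur_u = linear_ibp(x_l, x_u, W, b)
    # The reset factor (1 - s) lies in [1 - s_u, 1 - s_l].
    r_l, r_u = 1.0 - s_u, 1.0 - s_l
    # Naive interval product for u * (1 - s); S-IBP's bounds for this
    # coupled non-linearity are tighter, this is just a sound stand-in.
    corners = np.stack([u_l * r_l, u_l * r_u, u_u * r_l, u_u * r_u])
    leak_l = decay * corners.min(axis=0)
    leak_u = decay * corners.max(axis=0)
    new_u_l, new_u_u = leak_l + cur_l, leak_u + cur_u
    # The Heaviside spike function is monotone, so bounds pass through it.
    new_s_l = (new_u_l >= v_th).astype(float)
    new_s_u = (new_u_u >= v_th).astype(float)
    return new_u_l, new_u_u, new_s_l, new_s_u
```

Starting from input intervals (for example, an epsilon-ball around a digital input, or flip bounds on a spike train) and applying such a step over the whole time window yields sound, if loose, bounds on the network's output, which is the kind of quantity certified training then optimizes.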
