Paper Title

Double Sampling Randomized Smoothing

Authors

Linyi Li, Jiawei Zhang, Tao Xie, Bo Li

Abstract

Neural networks (NNs) are known to be vulnerable to adversarial perturbations, and thus there is a line of work aiming to provide robustness certification for NNs, such as randomized smoothing, which samples smoothing noise from a certain distribution to certify the robustness of the smoothed classifier. However, as shown by previous work, the certified robust radius in randomized smoothing suffers when scaling to large datasets ("curse of dimensionality"). To overcome this hurdle, we propose a Double Sampling Randomized Smoothing (DSRS) framework, which exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier. Theoretically, under mild assumptions, we prove that DSRS can certify a $\Theta(\sqrt d)$ robust radius under the $\ell_2$ norm, where $d$ is the input dimension, implying that DSRS may be able to break the curse of dimensionality of randomized smoothing. We instantiate DSRS for a generalized family of Gaussian smoothing and propose an efficient and sound computing method based on customized dual optimization that accounts for sampling error. Extensive experiments on MNIST, CIFAR-10, and ImageNet verify our theory and show that DSRS consistently certifies larger robust radii than existing baselines under different settings. Code is available at https://github.com/llylly/DSRS.
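For context, the single-distribution certificate that DSRS tightens is the standard randomized-smoothing bound of Cohen et al. (2019): with Gaussian noise $\mathcal{N}(0, \sigma^2 I)$ and a lower bound $p_A > 1/2$ on the smoothed classifier's top-class probability, the certified $\ell_2$ radius is $\sigma \, \Phi^{-1}(p_A)$. A minimal sketch (the function name is illustrative and not from the DSRS codebase):

```python
# Baseline randomized-smoothing certificate (Cohen et al., 2019),
# the certification that DSRS tightens via a second smoothing distribution.
from statistics import NormalDist


def certified_radius(p_a: float, sigma: float) -> float:
    """L2 certified radius for a smoothed classifier whose top class
    has probability lower bound p_a under noise N(0, sigma^2 I)."""
    if p_a <= 0.5:
        return 0.0  # cannot certify: top class not confidently dominant
    # Phi^{-1}(p_a) via the standard normal inverse CDF
    return sigma * NormalDist().inv_cdf(p_a)


# e.g. p_a = 0.99, sigma = 0.5 gives roughly 0.5 * 2.326 ~= 1.163
print(certified_radius(0.99, 0.5))
```

Note that this radius depends on the input dimension $d$ only through how hard it is to keep $p_A$ high under high-dimensional noise, which is exactly where the curse of dimensionality enters and where DSRS's extra sampled probability helps.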
