Title

Certifying Neural Network Robustness to Random Input Noise from Samples

Authors

Anderson, Brendon G., Sojoudi, Somayeh

Abstract

Methods to certify the robustness of neural networks in the presence of input uncertainty are vital in safety-critical settings. Most certification methods in the literature are designed for adversarial input uncertainty, but researchers have recently shown a need for methods that consider random uncertainty. In this paper, we propose a novel robustness certification method that upper bounds the probability of misclassification when the input noise follows an arbitrary probability distribution. This bound is cast as a chance-constrained optimization problem, which is then reformulated using input-output samples to replace the optimization constraints. The resulting optimization reduces to a linear program with an analytical solution. Furthermore, we develop a sufficient condition on the number of samples needed to make the misclassification bound hold with overwhelming probability. Our case studies on MNIST classifiers show that this method is able to certify a uniform infinity-norm uncertainty region with a radius nearly 50 times larger than what the current state-of-the-art method can certify.
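For intuition, below is a minimal sketch of a sample-based certificate in the spirit the abstract describes: draw noisy input-output samples, measure how often the classifier errs, and convert that into a high-probability upper bound on the misclassification probability. This is not the paper's chance-constrained linear-program reformulation; it uses a simple Hoeffding concentration bound instead, and the names `classifier`, `noise_sampler`, and the toy linear model are hypothetical placeholders.

```python
# Hypothetical sketch of a sample-based robustness certificate.
# NOT the paper's chance-constrained LP; it bounds the misclassification
# probability from samples via Hoeffding's inequality.
import numpy as np


def certify_random_noise(classifier, x_nominal, true_label, noise_sampler,
                         n_samples=10_000, delta=1e-3, rng=None):
    """Upper-bound P(misclassification under random input noise) from samples.

    classifier    : callable mapping an input array to a predicted label
    x_nominal     : clean input to be certified
    true_label    : label the classifier should output
    noise_sampler : callable(rng) returning one noise realization
    n_samples     : number of input-output samples drawn
    delta         : allowed failure probability of the certificate itself
    """
    rng = np.random.default_rng() if rng is None else rng

    # Empirical misclassification frequency over noisy samples.
    errors = 0
    for _ in range(n_samples):
        x_noisy = x_nominal + noise_sampler(rng)
        if classifier(x_noisy) != true_label:
            errors += 1
    empirical_rate = errors / n_samples

    # Hoeffding: with probability >= 1 - delta over the drawn samples,
    # the true misclassification probability is at most this value.
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n_samples))
    return min(1.0, empirical_rate + slack)


if __name__ == "__main__":
    # Toy stand-in for an MNIST network: a random linear classifier,
    # with uniform infinity-norm noise of radius eps.
    W = np.random.default_rng(0).normal(size=(10, 784))
    toy_classifier = lambda x: int(np.argmax(W @ x))
    x0 = np.zeros(784)
    eps = 0.1
    uniform_noise = lambda rng: rng.uniform(-eps, eps, size=784)

    bound = certify_random_noise(toy_classifier, x0, toy_classifier(x0),
                                 uniform_noise, n_samples=5_000, delta=1e-3)
    print(f"Certified misclassification probability <= {bound:.4f}")
```

The sample count `n_samples` plays the same role as the sufficient sample-size condition mentioned in the abstract: larger sample sets shrink the slack term and make the certificate hold with higher confidence.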
