Title

Tightened Convex Relaxations for Neural Network Robustness Certification

Authors

Anderson, Brendon G., Ma, Ziye, Li, Jingqi, Sojoudi, Somayeh

Abstract

In this paper, we consider the problem of certifying the robustness of neural networks to perturbed and adversarial input data. Such certification is imperative for the application of neural networks in safety-critical decision-making and control systems. Certification techniques using convex optimization have been proposed, but they often suffer from relaxation errors that void the certificate. Our work exploits the structure of ReLU networks to improve relaxation errors through a novel partition-based certification procedure. The proposed method is proven to tighten existing linear programming relaxations, and asymptotically achieves zero relaxation error as the partition is made finer. We develop a finite partition that attains zero relaxation error and use the result to derive a tractable partitioning scheme that minimizes the worst-case relaxation error. Experiments using real data show that the partitioning procedure is able to issue robustness certificates in cases where prior methods fail. Consequently, partition-based certification procedures are found to provide an intuitive, effective, and theoretically justified method for tightening existing convex relaxation techniques.
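The abstract's core idea — certifying each cell of a partition of the input set separately yields tighter bounds than certifying the whole set at once — can be illustrated with a minimal sketch. This uses plain interval bound propagation as a stand-in for the paper's linear programming relaxation, and the two-layer ReLU network and its weights are hypothetical choices made for illustration, not taken from the paper:

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Interval bounds for y = W @ x + b over the box x in [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def certify(lo, hi):
    """Output bounds of a small fixed 2-layer ReLU net (hypothetical weights)."""
    W1 = np.array([[1.0, -1.0], [1.0, 1.0]]); b1 = np.zeros(2)
    W2 = np.array([[1.0, 1.0]]); b2 = np.zeros(1)
    l1, u1 = affine_bounds(W1, b1, lo, hi)
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)  # ReLU is monotone
    l2, u2 = affine_bounds(W2, b2, l1, u1)
    return float(l2[0]), float(u2[0])

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# Certify the whole input box in one shot.
l_whole, u_whole = certify(lo, hi)

# Partition the box along x2 into two halves, certify each half separately,
# and take the union of the per-cell bounds as the overall certificate.
lA, uA = certify(lo, np.array([hi[0], 0.0]))
lB, uB = certify(np.array([lo[0], 0.0]), hi)
l_part, u_part = min(lA, lB), max(uA, uB)

print(u_whole, u_part)  # the partitioned upper bound is strictly tighter
```

For this example the whole-box upper bound is 4.0, while the two-cell partition certifies an upper bound of 3.0: splitting the box removes some of the relaxation error introduced at the ReLU layer, mirroring the abstract's claim that the error shrinks (asymptotically to zero) as the partition is refined.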
