Paper Title
Lagrangian Decomposition for Neural Network Verification
Paper Authors
Paper Abstract
A fundamental component of neural network verification is the computation of bounds on the values a network's outputs can take. Previous methods have either used off-the-shelf solvers, discarding the problem structure, or relaxed the problem even further, making the bounds unnecessarily loose. We propose a novel approach based on Lagrangian Decomposition. Our formulation admits an efficient supergradient ascent algorithm, as well as an improved proximal algorithm. Both algorithms offer three advantages: (i) they yield bounds that are provably at least as tight as previous dual algorithms relying on Lagrangian relaxations; (ii) they are based on operations analogous to the forward/backward passes of neural network layers and are therefore easily parallelizable, amenable to GPU implementation, and able to take advantage of the convolutional structure of problems; and (iii) they allow for anytime stopping while still providing valid bounds. Empirically, we show that we obtain bounds comparable to those of off-the-shelf solvers in a fraction of their running time, and obtain tighter bounds in the same time as previous dual algorithms. This results in an overall speed-up when employing the bounds for formal verification. Code for our algorithms is available at https://github.com/oval-group/decomposition-plnn-bounds.
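To make the supergradient scheme mentioned in the abstract concrete, below is a minimal toy sketch of Lagrangian decomposition in Python. It is not taken from the paper's codebase: the one-dimensional objective, the coefficients a1 and a2, and the box [lo, hi] are hypothetical choices made for illustration. A single variable is duplicated into two copies tied by an equality constraint; dualizing that constraint decouples the problem into subproblems with closed-form solutions, and the dual is maximized by supergradient ascent, yielding a valid lower bound at every iteration (the anytime property).

# Minimal toy sketch of Lagrangian decomposition with supergradient ascent.
# Illustrative only: a one-dimensional problem, not the paper's layered
# network formulation. We lower-bound
#     min_x  f1(x) + f2(x)   over the box [lo, hi]
# by duplicating x into copies (x, z) tied by x = z, then dualizing:
#     g(rho) = min_x [f1(x) + rho*x]  +  min_z [f2(z) - rho*z].
import math

# Hypothetical linear objectives f1(x) = a1*x and f2(z) = a2*z on [lo, hi];
# a linear function over a box is minimized at an endpoint, mirroring the
# closed-form subproblem solutions that make decomposition methods cheap.
a1, a2, lo, hi = 1.5, -2.0, -1.0, 1.0

def argmin_linear_over_box(coeff, lo, hi):
    # Closed-form minimizer of coeff * x over [lo, hi].
    return lo if coeff >= 0 else hi

rho, step = 0.0, 0.5
for t in range(1, 101):
    # Decoupled subproblems, solved in closed form.
    x_star = argmin_linear_over_box(a1 + rho, lo, hi)
    z_star = argmin_linear_over_box(a2 - rho, lo, hi)
    dual_value = (a1 + rho) * x_star + (a2 - rho) * z_star
    # A supergradient of the concave dual g at rho is (x_star - z_star);
    # ascend with a diminishing step size. By weak duality, dual_value is
    # a valid lower bound at every iteration, so we may stop anytime.
    rho += (step / math.sqrt(t)) * (x_star - z_star)

print(f"dual lower bound: {dual_value:.4f}")
# Converges to -0.5 = min of (a1 + a2) * x over [lo, hi] for these values.

In the network setting the paper addresses, the duplicated variables would correspond to activations at layer interfaces, and the decoupled subproblems can be batched across layers and neurons, which is what makes the approach parallelizable and GPU-friendly as the abstract claims.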