Paper Title

On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory

Paper Authors

Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif

Paper Abstract

This paper tackles the problem of Lipschitz regularization of Convolutional Neural Networks. Lipschitz regularity is now established as a key property of modern deep learning with implications in training stability, generalization, robustness against adversarial examples, etc. However, computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard. Recent attempts from the literature introduce upper bounds to approximate this constant that are either efficient but loose or accurate but computationally expensive. In this work, by leveraging the theory of Toeplitz matrices, we introduce a new upper bound for convolutional layers that is both tight and easy to compute. Based on this result we devise an algorithm to train Lipschitz regularized Convolutional Neural Networks.
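To give a rough feel for why Fourier-based bounds of this kind are cheap to compute, the snippet below is a minimal sketch, not the paper's exact LipBound algorithm (which bounds the norm of the doubly-block-Toeplitz convolution matrix via a matrix trigonometric polynomial). It uses the related, well-known fact that for a multi-channel 2D convolution with circular padding, the layer's singular values are those of the per-frequency c_out × c_in matrices of kernel Fourier coefficients, so their maximum gives the layer's Lipschitz constant with respect to the L2 norm. The function name conv_spectral_norm_bound and its arguments are illustrative assumptions, not names from the paper.

```python
import numpy as np

def conv_spectral_norm_bound(kernel, input_size):
    """Sketch: spectral norm of a 2D convolution with circular padding.

    kernel: array of shape (c_out, c_in, k, k)
    input_size: spatial size n of the n x n input feature maps

    For circular padding the returned value is the exact largest singular
    value of the layer; for standard zero padding it is a commonly used
    approximation rather than a guaranteed bound.
    """
    c_out, c_in, _, _ = kernel.shape
    # Zero-pad each filter to the full spatial size and take its 2D FFT.
    transform = np.fft.fft2(kernel, s=(input_size, input_size), axes=(2, 3))
    # For every frequency (u, v), form the c_out x c_in matrix of Fourier
    # coefficients and compute its largest singular value.
    per_freq = np.transpose(transform, (2, 3, 0, 1)).reshape(-1, c_out, c_in)
    largest = np.linalg.svd(per_freq, compute_uv=False)[:, 0]
    return float(largest.max())

# Example: a random 3x3 kernel mapping 3 input channels to 16 output channels.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((16, 3, 3, 3))
print(conv_spectral_norm_bound(kernel, input_size=32))
```

The cost is one FFT of the kernel plus small per-frequency SVDs, which is why such quantities can be recomputed during training to regularize or constrain the Lipschitz constant of each convolutional layer.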
