Paper Title

Adversarial Training and Provable Robustness: A Tale of Two Objectives

Paper Authors

Jiameng Fan, Wenchao Li

Paper Abstract

We propose a principled framework that combines adversarial training and provable robustness verification for training certifiably robust neural networks. We formulate training as a joint optimization problem with both empirical and provable robustness objectives and develop a novel gradient-descent technique that eliminates bias in stochastic multi-gradients. We provide a theoretical analysis of the convergence of the proposed technique and an experimental comparison with state-of-the-art methods. Results on MNIST and CIFAR-10 show that our method consistently matches or outperforms prior approaches for provable ℓ∞ robustness. Notably, we achieve 6.60% verified test error on MNIST at ε = 0.3 and 66.57% on CIFAR-10 at ε = 8/255.
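The abstract points to a multi-objective view of training: one gradient comes from an empirical (adversarial) loss and another from a provable (verified) robustness bound, and the two are combined in a multi-gradient descent step. As a rough illustration only, the minimal PyTorch sketch below shows the classic closed-form two-objective step from multiple-gradient descent (MGDA, Désidéri 2012); the paper's actual contribution, a bias-eliminating estimator for stochastic multi-gradients, is not reproduced here, and the function name is this note's invention.

import torch

# Classic two-objective MGDA step: among all convex combinations
# a*g1 + (1-a)*g2, pick the one of minimum norm; its negation is a
# descent direction for both objectives whenever one exists.
# g1: flattened gradient of the empirical (adversarial) loss.
# g2: flattened gradient of the provable (verified) robustness loss.
def common_descent_direction(g1: torch.Tensor, g2: torch.Tensor) -> torch.Tensor:
    diff = g1 - g2
    denom = torch.dot(diff, diff)
    if denom.item() < 1e-12:
        # Gradients (nearly) coincide; either one serves as the common direction.
        return g1
    # Closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a in [0, 1].
    alpha = torch.clamp(torch.dot(g2 - g1, g2) / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

In a full training loop, g1 might come from a PGD adversarial loss and g2 from a certified bound such as IBP, each flattened with torch.cat([p.grad.view(-1) for p in model.parameters()]); mini-batch noise makes this combined direction biased, which is precisely the issue the paper's debiasing technique targets.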
