Paper Title
Augmented Lagrangian Adversarial Attacks
Paper Authors
Paper Abstract
Adversarial attack algorithms are dominated by penalty methods, which are slow in practice, or more efficient distance-customized methods, which are heavily tailored to the properties of the distance considered. We propose a white-box attack algorithm to generate minimally perturbed adversarial examples based on Augmented Lagrangian principles. We bring several algorithmic modifications, which have a crucial effect on performance. Our attack enjoys the generality of penalty methods and the computational efficiency of distance-customized algorithms, and can be readily used for a wide set of distances. We compare our attack to state-of-the-art methods on three datasets and several models, and consistently obtain competitive performances with similar or lower computational complexity.
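To make the idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a minimal-perturbation attack built on a classical augmented Lagrangian for an inequality constraint g(x+δ) ≤ 0, where g is a misclassification margin and the distance is the L2 norm. The function names (al_attack, margin_constraint), optimizer, step sizes, and penalty schedule are illustrative assumptions, not the paper's exact algorithm, which includes several algorithmic modifications not shown here.

```python
import torch

def margin_constraint(logits, y):
    """Constraint g: g <= 0 once the true class is no longer the argmax (untargeted)."""
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    mask = torch.nn.functional.one_hot(y, logits.shape[1]).bool()
    best_other = logits.masked_fill(mask, float("-inf")).amax(dim=1)
    return true_logit - best_other

def al_attack(model, x, y, steps=200, lr=1e-2, rho=1.0, rho_growth=1.01):
    """Minimize an L2 distance subject to g <= 0 using a classical augmented Lagrangian."""
    delta = torch.zeros_like(x, requires_grad=True)
    lam = torch.ones(x.shape[0], device=x.device)        # Lagrange multipliers
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)
        dist = (adv - x).flatten(1).norm(dim=1)           # example distance: L2
        g = margin_constraint(model(adv), y)
        # Augmented Lagrangian term for the inequality constraint g <= 0
        al = (torch.clamp(lam + rho * g, min=0) ** 2 - lam ** 2) / (2 * rho)
        loss = (dist + al).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

        with torch.no_grad():                             # dual / penalty updates
            lam = torch.clamp(lam + rho * g, min=0)
            rho = rho * rho_growth

    return (x + delta.detach()).clamp(0, 1)
```

In this sketch the multiplier grows whenever the constraint is violated, so the trade-off between minimizing the distance and reaching misclassification is adjusted automatically rather than through a hand-tuned penalty weight, which is the generality and efficiency trade-off the abstract refers to.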