Paper Title

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

Paper Authors

Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang

Paper Abstract

Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.
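For context on the certification step the abstract refers to, below is a minimal sketch (not code from the paper) of how the certified l2 radius of a Gaussian-smoothed classifier is typically computed from its class probabilities, using the randomized-smoothing bound R = (sigma / 2) * (Phi^-1(p_A) - Phi^-1(p_B)), where p_A is the probability of the top class and p_B that of the runner-up under Gaussian input noise of standard deviation sigma. All names here (certified_radius, probs, label, sigma) are illustrative assumptions, not identifiers from the MACER codebase.

```python
import torch
from torch.distributions import Normal


def certified_radius(probs: torch.Tensor, label: torch.Tensor, sigma: float) -> torch.Tensor:
    """Sketch of the certified l2 radius of a Gaussian-smoothed classifier.

    probs: (batch, num_classes) class probabilities of the smoothed classifier
           estimated under Gaussian noise with std sigma.
    label: (batch,) true class indices (int64).
    Returns a (batch,) tensor; a non-positive value means the point is not certified.
    """
    norm = Normal(0.0, 1.0)
    eps = 1e-6  # keep the inverse Gaussian CDF finite at probabilities near 0 or 1

    # p_A: probability assigned to the true class
    p_true = probs.gather(1, label.unsqueeze(1)).squeeze(1)
    # p_B: highest probability among the remaining classes
    probs_wo_true = probs.scatter(1, label.unsqueeze(1), 0.0)
    p_runner_up = probs_wo_true.max(dim=1).values

    return 0.5 * sigma * (
        norm.icdf(p_true.clamp(eps, 1 - eps))
        - norm.icdf(p_runner_up.clamp(eps, 1 - eps))
    )
```

Roughly speaking, MACER maximizes a differentiable surrogate of this radius (built from soft, Gaussian-averaged predictions) through a hinge-style robustness term added to the usual classification loss; the exact objective is given in the paper.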
