Title

Lower Difficulty and Better Robustness: A Bregman Divergence Perspective for Adversarial Training

Authors

Zihui Wu, Haichang Gao, Bingqian Zhou, Xiaoyan Guo, Shudong Zhang

Abstract

In this paper, we investigate improving the adversarial robustness obtained in adversarial training (AT) by reducing the difficulty of optimization. To better study this problem, we build a novel Bregman divergence perspective for AT, in which AT can be viewed as the sliding process of the training data points on the negative entropy curve. Based on this perspective, we analyze the learning objectives of two typical AT methods, i.e., PGD-AT and TRADES, and we find that the optimization process of TRADES is easier than that of PGD-AT, because TRADES separates the learning objective of PGD-AT. In addition, we discuss the function of entropy in TRADES, and we find that models with high entropy can be better robustness learners. Inspired by the above findings, we propose two methods, i.e., FAIT and MER, which not only reduce the difficulty of optimization under 10-step PGD adversaries, but also provide better robustness. Our work suggests that reducing the difficulty of optimization under 10-step PGD adversaries is a promising approach for enhancing adversarial robustness in AT.
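As background for the perspective described above: the Bregman divergence generated by the negative-entropy function is exactly the KL divergence, which is the regularizer used in TRADES. A minimal numerical check of this identity (a NumPy sketch for illustration; the function names are ours, not from the paper):

```python
import numpy as np

def neg_entropy(p):
    # phi(p) = sum_i p_i * log(p_i)  (negative Shannon entropy)
    return np.sum(p * np.log(p))

def bregman_neg_entropy(p, q):
    # Bregman divergence generated by phi:
    # D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>
    grad_q = np.log(q) + 1.0
    return neg_entropy(p) - neg_entropy(q) - np.dot(grad_q, p - q)

def kl_divergence(p, q):
    # KL(p || q) for probability vectors p, q
    return np.sum(p * np.log(p / q))

# Two probability distributions (e.g., clean vs. adversarial predictions)
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])

# The two quantities coincide for distributions on the simplex
print(bregman_neg_entropy(p, q), kl_divergence(p, q))
```

The identity holds because the linear terms cancel when both vectors sum to one, which is what lets AT's KL-based objectives be read as movements along the negative entropy curve.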
