Paper Title

Overparameterized Linear Regression under Adversarial Attacks

Authors

Antônio H. Ribeiro, Thomas B. Schön

Abstract

We study the error of linear regression in the face of adversarial attacks. In this framework, an adversary changes the input to the regression model in order to maximize the prediction error. We provide bounds on the prediction error in the presence of an adversary as a function of the parameter norm and the error in the absence of such an adversary. We show how these bounds make it possible to study the adversarial error using analysis from non-adversarial setups. The obtained results shed light on the robustness of overparameterized linear models to adversarial attacks. Adding features might be either a source of additional robustness or brittleness. On the one hand, we use asymptotic results to illustrate how double-descent curves can be obtained for the adversarial error. On the other hand, we derive conditions under which the adversarial error can grow to infinity as more features are added, while at the same time, the test error goes to zero. We show this behavior is caused by the fact that the norm of the parameter vector grows with the number of features. It is also established that $\ell_\infty$ and $\ell_2$-adversarial attacks might behave fundamentally differently due to how the $\ell_1$ and $\ell_2$-norms of random projections concentrate. We also show how our reformulation allows for solving adversarial training as a convex optimization problem. This fact is then exploited to establish similarities between adversarial training and parameter-shrinking methods and to study how the training might affect the robustness of the estimated models.
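As a hedged sketch of where the parameter-norm dependence comes from (this is the standard dual-norm argument, not a verbatim statement from the paper): for a linear model $\hat{y} = x^\top \beta$ and an attack $\Delta x$ constrained by $\|\Delta x\|_p \le \delta$, the worst-case squared error has a closed form in the dual norm $\|\beta\|_q$, with $1/p + 1/q = 1$:

$$\max_{\|\Delta x\|_p \le \delta} \left(y - (x + \Delta x)^\top \beta\right)^2 = \left(|y - x^\top \beta| + \delta \|\beta\|_q\right)^2.$$

The adversary aligns $\Delta x$ against $\beta$, inflating the absolute residual by $\delta\|\beta\|_q$. For $\ell_\infty$ attacks the relevant quantity is $\|\beta\|_1$ and for $\ell_2$ attacks it is $\|\beta\|_2$, which is why a parameter norm that grows with the number of features can drive the adversarial error to infinity even while the test error goes to zero.

Averaging this identity over the training data also makes the convexity claim concrete: adversarial training minimizes a sum of squared terms of the form $|y_i - x_i^\top \beta| + \delta\|\beta\|_q$, each convex in $\beta$. Below is a minimal, self-contained sketch of that formulation using cvxpy; the synthetic data, the attack budget `delta`, and the choice of the $\ell_\infty$ threat model (dual norm $\ell_1$) are illustrative assumptions, not the paper's experimental setup.

```python
import cvxpy as cp
import numpy as np

# Illustrative synthetic data (not from the paper).
rng = np.random.default_rng(0)
n, d = 50, 20
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

delta = 0.1  # assumed l_inf attack budget

# Dual-norm identity: a worst-case l_inf attack inflates each absolute
# residual by delta * ||beta||_1, so adversarial training becomes the
# convex problem  min_beta  sum_i (|y_i - x_i' beta| + delta ||beta||_1)^2.
beta = cp.Variable(d)
worst_case_residual = cp.abs(y - X @ beta) + delta * cp.norm(beta, 1)
problem = cp.Problem(cp.Minimize(cp.sum(cp.square(worst_case_residual))))
problem.solve()

print("adversarially trained ||beta||_1:", np.linalg.norm(beta.value, 1))
```

Note the family resemblance to Lasso: the $\delta\|\beta\|_1$ term penalizes the parameter norm inside each squared residual, which is the sense in which adversarial training behaves like a parameter-shrinking method.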
