论文标题

鲁棒深度学习作为最优控制:洞察与收敛保证

Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees

论文作者

Seidman, Jacob H., Fazlyab, Mahyar, Preciado, Victor M., Pappas, George J.

论文摘要

深度神经网络对对抗性选择输入的脆弱性,促使人们重新审视深度学习算法。在训练中加入对抗样本是抵御对抗攻击的一种流行防御机制。该机制可以表述为一个极小极大(min-max)优化问题:对手试图使用迭代的一阶算法最大化损失函数,而学习者则试图将其最小化。然而,以这种方式寻找对抗样本会在训练期间造成过高的计算开销。通过将该极小极大问题解释为最优控制问题,最近的研究表明,可以在优化问题中利用神经网络的组合结构,从而显著缩短训练时间。在本文中,我们结合鲁棒最优控制与优化中的不精确 oracle 方法的技术,首次给出了这一对抗训练算法的收敛性分析。我们的分析阐明了算法的超参数如何影响其稳定性与收敛性。我们通过在一个鲁棒分类问题上的实验来支持我们的见解。

The fragility of deep neural networks to adversarially-chosen inputs has motivated the need to revisit deep learning algorithms. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. This mechanism can be formulated as a min-max optimization problem, where the adversary seeks to maximize the loss function using an iterative first-order algorithm while the learner attempts to minimize it. However, finding adversarial examples in this way causes excessive computational overhead during training. By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly. In this paper, we provide the first convergence analysis of this adversarial training algorithm by combining techniques from robust optimal control and inexact oracle methods in optimization. Our analysis sheds light on how the hyperparameters of the algorithm affect its stability and convergence. We support our insights with experiments on a robust classification problem.
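The min-max formulation described above can be illustrated with a minimal sketch. This is not the paper's optimal-control algorithm; it is a generic numpy example of the baseline scheme the abstract refers to: an inner loop of projected gradient ascent on an L-infinity-bounded perturbation (the adversary), wrapped in an outer gradient-descent loop on the model weights (the learner), here for a toy logistic-regression classifier. All function names and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(w, x, y):
    """Logistic loss for one example, plus gradients w.r.t. w and x."""
    p = sigmoid(x @ w)
    loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    g = p - y                     # d(loss)/d(x @ w)
    return loss, g * x, g * w     # (loss, grad_w, grad_x)

def adversarial_train(X, y, eps=0.3, inner_steps=5, alpha=0.1,
                      lr=0.5, epochs=50, seed=0):
    """Min-max training: inner PGD maximizes the loss over a bounded
    perturbation delta; the outer loop minimizes over weights w."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        grad_w_total = np.zeros_like(w)
        for x, yi in zip(X, y):
            # Adversary: projected gradient ascent on delta, ||delta||_inf <= eps.
            delta = np.zeros_like(x)
            for _ in range(inner_steps):
                _, _, gx = loss_grad(w, x + delta, yi)
                delta = np.clip(delta + alpha * np.sign(gx), -eps, eps)
            # Learner: accumulate the weight gradient at the worst-case input.
            _, gw, _ = loss_grad(w, x + delta, yi)
            grad_w_total += gw
        w -= lr * grad_w_total / len(X)
    return w

# Toy robust classification problem: two well-separated 2-D clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.3, size=(20, 2)),
               rng.normal(+2.0, 0.3, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

w = adversarial_train(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

The nested loop makes the cost concern in the abstract concrete: every outer weight update pays for `inner_steps` extra forward/backward passes per example, which is the overhead the optimal-control reformulation aims to reduce.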
