Paper Title


Accelerating Certified Robustness Training via Knowledge Transfer

Paper Authors

Pratik Vaishnavi, Kevin Eykholt, Amir Rahmati

Paper Abstract


Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of AI-controlled systems. Although numerous state-of-the-art certified training methods have been developed, they are computationally expensive and scale poorly with respect to both dataset and network complexity. Widespread usage of certified training is further hindered by the fact that periodic retraining is necessary to incorporate new data and network improvements. In this paper, we propose Certified Robustness Transfer (CRT), a general-purpose framework for reducing the computational overhead of any certifiably robust training method through knowledge transfer. Given a robust teacher, our framework uses a novel training loss to transfer the teacher's robustness to the student. We provide theoretical and empirical validation of CRT. Our experiments on CIFAR-10 show that CRT speeds up certified robustness training by $8 \times$ on average across three different architecture generations while achieving comparable robustness to state-of-the-art methods. We also show that CRT can scale to large-scale datasets like ImageNet.
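The abstract does not spell out the exact form of CRT's training loss, so the following is only a minimal illustrative sketch, assuming a distillation-style objective in which a frozen, certifiably robust teacher's outputs guide the student via an MSE matching term. The function and model names (`crt_style_loss`, `train_step`, `student`, `teacher`) are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of robustness transfer via output matching (PyTorch).
# Assumption: the teacher is already certifiably robust and kept frozen;
# the student is trained cheaply (standard forward/backward passes) to
# imitate the teacher's predictions on clean inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def crt_style_loss(student_logits: torch.Tensor,
                   teacher_logits: torch.Tensor) -> torch.Tensor:
    """Illustrative knowledge-transfer loss: L2 matching of student and
    teacher outputs (not necessarily the paper's exact objective)."""
    return F.mse_loss(student_logits, teacher_logits)

def train_step(student: nn.Module, teacher: nn.Module,
               optimizer: torch.optim.Optimizer, x: torch.Tensor) -> float:
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x)   # frozen robust teacher's predictions
    s_logits = student(x)       # ordinary (inexpensive) student forward pass
    loss = crt_style_loss(s_logits, t_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the expensive certified-training procedure is paid once for the teacher, and later architecture generations or data updates only require the cheap matching objective above, which is consistent with the reported speedups across architecture generations.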
