Paper Title


A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness

Paper Authors

Souvik Kundu, Sairam Sundaresan, Massoud Pedram, Peter A. Beerel

Paper Abstract


Existing models that achieve state-of-the-art (SOTA) performance on both clean and adversarially-perturbed images rely on convolution operations conditioned with feature-wise linear modulation (FiLM) layers. These layers require many new parameters and are sensitive to hyperparameters. They significantly increase training time, memory cost, and potential latency, which can prove costly for resource-limited or real-time applications. In this paper, we present a fast learnable once-for-all adversarial training (FLOAT) algorithm which, instead of the existing FiLM-based conditioning, uses a unique weight-conditioned learning scheme that requires no additional layers, and therefore incurs no significant increase in parameter count, training time, or network latency compared to standard adversarial training. In particular, we add configurable scaled noise to the weight tensors, which enables a trade-off between clean and adversarial performance. Extensive experiments show that FLOAT yields SOTA performance, improving clean and perturbed image classification by up to ~6% and ~10%, respectively. Moreover, real hardware measurements show that, in iso-hyperparameter settings, FLOAT reduces training time by up to 1.43x and requires up to 1.47x fewer model parameters than FiLM-based alternatives. Additionally, to further improve memory efficiency, we introduce FLOAT sparse (FLOATS), a form of non-iterative model pruning, and provide a detailed empirical analysis of the three-way accuracy-robustness-complexity trade-off for this new class of pruned, conditionally trained models.
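The abstract's core idea is to condition the network on its weights rather than on its features: configurable scaled noise is added to the weight tensors, and a conditioning factor selects between clean and robust behavior. The sketch below is a minimal illustration of that idea in PyTorch, based only on the abstract's description; it is not the authors' implementation, and the names `lambda_cond` and `alpha`, as well as scaling the noise by the weight standard deviation, are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of weight-conditioned noise:
# a Conv2d whose weight tensor receives additive scaled noise, gated by a
# conditioning factor lambda_cond that trades off clean vs. robust behavior.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoiseConditionedConv2d(nn.Conv2d):
    """Conv2d whose weights get additive scaled noise, gated at run time."""

    def __init__(self, in_channels, out_channels, kernel_size, alpha=0.1, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size, **kwargs)
        # alpha (assumed hyperparameter) controls the relative noise magnitude.
        self.alpha = alpha

    def forward(self, x, lambda_cond=0.0):
        # Sample noise shaped like the weight tensor and scale it relative to
        # the weight's standard deviation so the perturbation is proportional.
        noise = torch.randn_like(self.weight) * self.weight.std().detach()
        # lambda_cond = 0 -> plain weights (clean mode);
        # lambda_cond = 1 -> fully noise-perturbed weights (robust mode).
        w = self.weight + lambda_cond * self.alpha * noise
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


if __name__ == "__main__":
    conv = NoiseConditionedConv2d(3, 16, 3, padding=1)
    x = torch.randn(2, 3, 32, 32)
    clean_out = conv(x, lambda_cond=0.0)   # behaves like a standard conv
    robust_out = conv(x, lambda_cond=1.0)  # noise-conditioned weights
    print(clean_out.shape, robust_out.shape)
```

Because the conditioning acts directly on existing weight tensors, no extra layers (and essentially no extra parameters) are introduced, which is the contrast the abstract draws with FiLM-based conditioning.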
