Paper Title

Learning Robust Kernel Ensembles with Kernel Average Pooling

Authors

Pouya Bashivan, Adam Ibrahim, Amirozhan Dehghani, Yifei Ren

Abstract

Model ensembles have long been used in machine learning to reduce the variance in individual model predictions, making them more robust to input perturbations. Pseudo-ensemble methods like dropout have also been commonly used in deep learning models to improve generalization. However, the application of these techniques to improve neural networks' robustness against input perturbations remains underexplored. We introduce Kernel Average Pooling (KAP), a neural network building block that applies the mean filter along the kernel dimension of the layer activation tensor. We show that ensembles of kernels with similar functionality naturally emerge in convolutional neural networks equipped with KAP and trained with backpropagation. Moreover, we show that when trained on inputs perturbed with additive Gaussian noise, KAP models are remarkably robust against various forms of adversarial attacks. Empirical evaluations on CIFAR10, CIFAR100, TinyImagenet, and Imagenet datasets show substantial improvements in robustness against strong adversarial attacks such as AutoAttack without training on any adversarial examples.
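The abstract describes KAP as a mean filter applied along the kernel (channel) dimension of a layer's activation tensor. Below is a minimal PyTorch sketch of that idea for a 4-D activation tensor of shape (N, C, H, W); the class name, window size of 3 neighbouring kernels, stride 1, and padding choice are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class KernelAveragePooling(nn.Module):
    """Minimal sketch of kernel average pooling (KAP): a mean filter applied
    along the kernel (channel) dimension of a convolutional activation tensor.
    Class name, window size, and padding are illustrative assumptions."""

    def __init__(self, pool_size: int = 3, stride: int = 1):
        super().__init__()
        # With an odd pool_size and stride 1, this padding keeps the number
        # of output channels equal to the number of input channels.
        self.pool = nn.AvgPool1d(kernel_size=pool_size, stride=stride,
                                 padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) activations from a convolutional layer.
        n, c, h, w = x.shape
        # Move the kernel dimension to the last axis so AvgPool1d averages
        # over neighbouring kernels at each spatial location.
        x = x.permute(0, 2, 3, 1).reshape(n, h * w, c)
        x = self.pool(x)
        c_out = x.shape[-1]
        return x.reshape(n, h, w, c_out).permute(0, 3, 1, 2)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 8, 8)      # dummy conv activations
    kap = KernelAveragePooling(pool_size=3)
    print(kap(feats).shape)               # torch.Size([2, 64, 8, 8])
```

In this reading, placing such a block after a convolution averages the responses of neighbouring kernels, which is the mechanism the abstract credits with encouraging ensembles of kernels with similar functionality to emerge during training.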
