Paper Title

Adversarial Robustness on In- and Out-Distribution Improves Explainability

Authors

Maximilian Augustin, Alexander Meinke, Matthias Hein

Abstract

Neural networks have led to major improvements in image classification, but they suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions. In this work we propose RATIO, a training procedure for Robustness via Adversarial Training on In- and Out-distribution, which leads to robust models with reliable and robust confidence estimates on the out-distribution. RATIO has generative properties similar to those of adversarial training, so that visual counterfactuals produce class-specific features. While adversarial training usually comes at the price of lower clean accuracy, RATIO achieves state-of-the-art $l_2$-adversarial robustness on CIFAR10 while maintaining better clean accuracy.
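The abstract describes RATIO only at a high level; the paper itself uses deep networks attacked with multi-step PGD. As a minimal sketch of the underlying idea only, the toy NumPy example below trains a logistic-regression classifier with (i) an $l_2$ adversarial-training term on in-distribution data and (ii) a term pushing confidence toward the uniform distribution on adversarially perturbed out-distribution data. The data, step sizes, weighting `lam`, and the single-step $l_2$ attacks are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l2_adv(x, y, w, b, eps):
    # Single l2-normalized gradient step increasing the cross-entropy loss
    # (a crude one-step stand-in for the paper's multi-step PGD attack).
    p = sigmoid(x @ w + b)
    g = (p - y)[:, None] * w[None, :]            # d(loss)/dx
    n = np.linalg.norm(g, axis=1, keepdims=True) + 1e-12
    return x + eps * g / n

def l2_conf_attack(x, w, b, eps):
    # Single l2 step increasing the model's confidence |p - 0.5|, used to
    # find worst-case over-confident out-distribution points.
    p = sigmoid(x @ w + b)
    g = (np.sign(p - 0.5) * p * (1 - p))[:, None] * w[None, :]
    n = np.linalg.norm(g, axis=1, keepdims=True) + 1e-12
    return x + eps * g / n

# Toy in-distribution data: two Gaussian blobs (classes 0 and 1).
x_in = np.vstack([rng.normal([-2.0, 0.0], 1.0, (100, 2)),
                  rng.normal([2.0, 0.0], 1.0, (100, 2))])
y_in = np.concatenate([np.zeros(100), np.ones(100)])
# Toy out-distribution data: a cluster the model should be unsure about.
x_out = rng.normal([0.0, 6.0], 1.0, (100, 2))

w, b, lr, eps, lam = np.zeros(2), 0.0, 0.1, 0.5, 0.5
for _ in range(300):
    # In-distribution term: standard adversarial training.
    xa = l2_adv(x_in, y_in, w, b, eps)
    p = sigmoid(xa @ w + b)
    gw = xa.T @ (p - y_in) / len(y_in)
    gb = np.mean(p - y_in)
    # Out-distribution term: push confidence toward 0.5 (the uniform
    # distribution) on adversarially perturbed out-distribution points.
    xo = l2_conf_attack(x_out, w, b, eps)
    po = sigmoid(xo @ w + b)
    gw += lam * xo.T @ (po - 0.5) / len(po)
    gb += lam * np.mean(po - 0.5)
    w -= lr * gw
    b -= lr * gb
```

After training, the classifier stays accurate on clean and $l_2$-perturbed in-distribution points while its confidence on the out-distribution cluster is pulled toward 0.5, which is the qualitative behavior RATIO targets at scale.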
