Paper Title

Measurably Stronger Explanation Reliability via Model Canonization

Authors

Franz Motzkus, Leander Weber, Sebastian Lapuschkin

Abstract

While rule-based attribution methods have proven useful for providing local explanations for Deep Neural Networks, explaining modern and more varied network architectures yields new challenges in generating trustworthy explanations, since the established rule sets might not be sufficient or applicable to novel network structures. As an elegant solution to the above issue, network canonization has recently been introduced. This procedure leverages the implementation-dependency of rule-based attributions and restructures a model into a functionally identical equivalent of alternative design to which established attribution rules can be applied. However, the idea of canonization and its usefulness have so far only been explored qualitatively. In this work, we quantitatively verify the beneficial effects of network canonization to rule-based attributions on VGG-16 and ResNet18 models with BatchNorm layers and thus extend the current best practices for obtaining reliable neural network explanations.
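The canonization step the abstract describes restructures a model into a functionally identical equivalent. For BatchNorm layers, a common instance of this is folding the normalization into the preceding convolution so that established attribution rules apply to a plain conv layer. The following PyTorch sketch illustrates the idea; it is an assumed minimal example of BatchNorm fusion, not code from the paper, and the helper name `fuse_conv_bn` is hypothetical.

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d layer into the preceding Conv2d.

    In eval mode, BN computes y = gamma * (z - mean) / sqrt(var + eps) + beta.
    Absorbing the per-channel scale and shift into the conv weights and bias
    yields a single layer that computes the same function.
    """
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels, conv.kernel_size,
        conv.stride, conv.padding, conv.dilation, conv.groups, bias=True,
    )
    # per-output-channel scale: gamma / sqrt(running_var + eps)
    scale = bn.weight.data / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = (conv.bias.data if conv.bias is not None
                 else torch.zeros(conv.out_channels))
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused

# sanity check: fused conv matches conv -> bn in eval mode
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(8)
bn.eval()
bn.running_mean.uniform_(-1.0, 1.0)   # give BN non-trivial statistics
bn.running_var.uniform_(0.5, 2.0)
x = torch.randn(2, 3, 16, 16)
with torch.no_grad():
    assert torch.allclose(fuse_conv_bn(conv, bn)(x), bn(conv(x)), atol=1e-5)
```

Because the fused model is functionally identical, any difference in the resulting attribution maps stems purely from how the rules interact with the implementation, which is the implementation-dependency the paper exploits.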
