Paper Title


Functional Network: A Novel Framework for Interpretability of Deep Neural Networks

Authors

Ben Zhang, Zhetong Dong, Junsong Zhang, Hongwei Lin

Abstract


The layered structure of deep neural networks hinders the use of numerous analysis tools and thus the development of its interpretability. Inspired by the success of functional brain networks, we propose a novel framework for interpretability of deep neural networks, that is, the functional network. We construct the functional network of fully connected networks and explore its small-worldness. In our experiments, the mechanisms of regularization methods, namely, batch normalization and dropout, are revealed using graph theoretical analysis and topological data analysis. Our empirical analysis shows the following: (1) Batch normalization enhances model performance by increasing the global efficiency and the number of loops but reduces adversarial robustness by lowering the fault tolerance. (2) Dropout improves generalization and robustness of models by improving the functional specialization and fault tolerance. (3) The models with different regularizations can be clustered correctly according to their functional topological differences, reflecting the great potential of the functional network and topological data analysis in interpretability.
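The abstract does not spell out how the functional network is built, but a common construction borrowed from functional brain networks is to correlate per-neuron activations across a set of inputs, threshold the correlation matrix into a graph, and then compute graph-theoretic measures such as the global efficiency mentioned above. The sketch below illustrates this idea under those assumptions; the activation matrix, the 0.1 threshold, and the use of Pearson correlation are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

# Hypothetical activation matrix: rows = input samples, columns = hidden
# neurons (e.g., recorded from one layer of a trained fully connected net).
activations = rng.normal(size=(200, 16))

# Pairwise Pearson correlation between neuron activation profiles.
corr = np.corrcoef(activations.T)
n = corr.shape[0]

# Threshold |correlation| to obtain an undirected functional graph
# (adjacency matrix, no self-loops). The 0.1 cutoff is illustrative.
adj = (np.abs(corr) > 0.1) & ~np.eye(n, dtype=bool)

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs
    (unreachable pairs contribute 0), following Latora & Marchiori."""
    n = adj.shape[0]
    total = 0.0
    for s in range(n):
        # Breadth-first search from s over the unweighted graph.
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != s)
    return total / (n * (n - 1))

eff = global_efficiency(adj)
print(f"neurons={n}, edges={int(adj.sum()) // 2}, "
      f"global efficiency={eff:.3f}")
```

Comparing this measure across models trained with and without batch normalization or dropout is the kind of analysis the abstract describes; a higher global efficiency indicates shorter functional paths between neurons.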
