Paper Title

Interpretable Factorization for Neural Network ECG Models

Paper Authors

Christopher Snyder, Sriram Vishwanath

Abstract

The ability of deep learning (DL) to improve the practice of medicine and its clinical outcomes faces a looming obstacle: model interpretation. Without description of how outputs are generated, a collaborating physician can neither resolve when the model's conclusions are in conflict with his or her own, nor learn to anticipate model behavior. Current research aims to interpret networks that diagnose ECG recordings, which has great potential impact as recordings become more personalized and widely deployed. A generalizable impact beyond ECGs lies in the ability to provide a rich test-bed for the development of interpretive techniques in medicine. Interpretive techniques for Deep Neural Networks (DNNs), however, tend to be heuristic and observational in nature, lacking the mathematical rigor one might expect in the analysis of math equations. The motivation of this paper is to offer a third option, a scientific approach. We treat the model output itself as a phenomenon to be explained through component parts and equations governing their behavior. We argue that these component parts should also be "black boxes" -- additional targets to interpret heuristically with clear functional connection to the original. We show how to rigorously factor a DNN into a hierarchical equation consisting of black box variables. This is not a subdivision into physical parts, like an organism into its cells; it is but one choice of an equation into a collection of abstract functions. Yet, for DNNs trained to identify normal ECG waveforms on PhysioNet 2017 Challenge data, we demonstrate this choice yields interpretable component models identified with visual composite sketches of ECG samples in corresponding input regions. Moreover, the recursion distills this interpretation: additional factorization of component black boxes corresponds to ECG partitions that are more morphologically pure.
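The abstract's central idea, factoring a trained network into composed component functions that are themselves black boxes, can be illustrated with a minimal sketch. Everything below (the toy two-layer network, its sizes, and the choice of split point) is a hypothetical example for intuition, not the authors' actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" DNN: f(x) = W2 @ relu(W1 @ x + b1) + b2.
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

def relu(z):
    return np.maximum(z, 0.0)

def f(x):
    """The original model, viewed as a single black box."""
    return W2 @ relu(W1 @ x + b1) + b2

# One choice of factorization, f = g o h. Each factor is itself a
# black box, but with an exact functional connection to f, so each
# can be studied (and further factored) on its own.
def h(x):
    """Component black box: input region -> intermediate representation."""
    return relu(W1 @ x + b1)

def g(z):
    """Component black box: intermediate representation -> model output."""
    return W2 @ z + b2

x = rng.standard_normal(4)
assert np.allclose(f(x), g(h(x)))  # the factorization reproduces f exactly
```

The point of the sketch is that the split is a choice of equation, not a physical subdivision: many different (h, g) pairs reproduce the same f, and the paper's contribution is selecting factorizations whose components turn out to be interpretable on ECG data.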
