Paper Title


Improving Clinician Performance in Classification of EEG Patterns on the Ictal-Interictal-Injury Continuum using Interpretable Machine Learning

Authors

Alina Jade Barnett, Zhicheng Guo, Jin Jing, Wendong Ge, Peter W. Kaplan, Wan Yee Kong, Ioannis Karakis, Aline Herlopian, Lakshman Arcot Jayagopal, Olga Taraschenko, Olga Selioutski, Gamaleldin Osman, Daniel Goldenholz, Cynthia Rudin, M. Brandon Westover

Abstract


In intensive care units (ICUs), critically ill patients are monitored with electroencephalograms (EEGs) to prevent serious brain injury. The number of patients who can be monitored is constrained by the availability of trained physicians to read EEGs, and EEG interpretation can be subjective and prone to inter-observer variability. Automated deep learning systems for EEG could reduce human bias and accelerate the diagnostic process. However, black box deep learning models are untrustworthy, difficult to troubleshoot, and lack accountability in real-world applications, leading to a lack of trust and adoption by clinicians. To address these challenges, we propose a novel interpretable deep learning model that not only predicts the presence of harmful brainwave patterns but also provides high-quality case-based explanations of its decisions. Our model performs better than the corresponding black box model, despite being constrained to be interpretable. The learned 2D embedded space provides the first global overview of the structure of ictal-interictal-injury continuum brainwave patterns. The ability to understand how our model arrived at its decisions will not only help clinicians to diagnose and treat harmful brain activities more accurately but also increase their trust and adoption of machine learning models in clinical practice; this could be an integral component of the ICU neurologists' standard workflow.
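The abstract describes a model that pairs each prediction with a case-based explanation: the decision is grounded in learned prototype examples living in an embedding space. As a minimal illustrative sketch (not the paper's actual architecture; all names, shapes, and the nearest-prototype rule here are assumptions), case-based classification can be thought of as comparing a new sample's embedding against labeled prototypes and returning the most similar prototype alongside the predicted class:

```python
import numpy as np

# Hypothetical sketch of case-based (prototype) classification.
# Shapes and labels are illustrative only: 6 prototypes in a 2D
# embedding space, covering 3 example pattern classes.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(6, 2))             # learned prototype embeddings
prototype_labels = np.array([0, 0, 1, 1, 2, 2])  # class of each prototype

def classify_with_explanation(embedding):
    """Predict a class for an embedded sample and return the index of the
    most similar prototype, which serves as the case-based explanation."""
    distances = np.linalg.norm(prototypes - embedding, axis=1)
    nearest = int(np.argmin(distances))
    return int(prototype_labels[nearest]), nearest

label, proto_idx = classify_with_explanation(rng.normal(size=2))
```

The key interpretability property in this sketch is that the prediction is, by construction, traceable to a concrete stored case (`proto_idx`), which a clinician could inspect directly.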
