Paper Title
Additive MIL: Intrinsically Interpretable Multiple Instance Learning for Pathology
Paper Authors
Paper Abstract
Multiple Instance Learning (MIL) has been widely applied in pathology towards solving critical problems such as automating cancer diagnosis and grading, and predicting patient prognosis and therapy response. Deploying these models in a clinical setting requires careful inspection of these black boxes during development and deployment to identify failures and maintain physician trust. In this work, we propose a simple formulation of MIL models, which enables interpretability while maintaining similar predictive performance. Our Additive MIL models enable spatial credit assignment such that the contribution of each region in the image can be exactly computed and visualized. We show that our spatial credit assignment coincides with regions used by pathologists during diagnosis and improves upon classical attention heatmaps from attention MIL models. We show that any existing MIL model can be made additive with a simple change in function composition. We also show how these models can be used to debug model failures, identify spurious features, and highlight class-wise regions of interest, enabling their use in high-stakes environments such as clinical decision-making.
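The "simple change in function composition" described in the abstract can be illustrated with a minimal numpy sketch. This is a hypothetical toy example, not the paper's implementation: the embeddings and weights (`H`, `w_attn`, `W_cls`) are random stand-ins for learned parameters, and the classifier is kept linear so both compositions can be compared directly. Standard attention MIL pools attended patch features and then classifies the bag; the additive formulation instead classifies each attended patch and sums the per-patch logits, so each patch's contribution to every class score is exact by construction.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_patches, feat_dim, n_classes = 5, 8, 3

# Hypothetical learned quantities, randomized here for illustration.
H = rng.normal(size=(n_patches, feat_dim))      # patch embeddings
w_attn = rng.normal(size=feat_dim)              # attention scoring vector
W_cls = rng.normal(size=(feat_dim, n_classes))  # linear bag classifier

a = softmax(H @ w_attn)                          # attention weights over patches

# Standard attention MIL: pool first, then classify the bag.
#   logits = f( sum_i a_i * h_i )
pooled_logits = (a @ H) @ W_cls

# Additive MIL: classify each attended patch, then sum.
#   logits = sum_i f( a_i * h_i )
contributions = (a[:, None] * H) @ W_cls         # shape (n_patches, n_classes)
additive_logits = contributions.sum(axis=0)

# Each row of `contributions` is that patch's exact credit toward each
# class logit, which is what enables the heatmaps described above.
assert np.allclose(pooled_logits, additive_logits)
```

With a linear classifier the two compositions coincide; the point of the additive formulation is that it preserves this exact per-patch decomposition even when the patch-level classifier is a deeper network, whereas pooling before a nonlinear classifier destroys it.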