Paper Title
Cross-Modal Information Maximization for Medical Imaging: CMIM
Paper Authors
Paper Abstract
In hospitals, data are siloed in specific information systems that make the same information available under different modalities, such as the different medical imaging exams a patient undergoes (CT scans, MRI, PET, ultrasound, etc.) and their associated radiology reports. This offers a unique opportunity to obtain and use at train time multiple views of the same information that might not always be available at test time. In this paper, we propose an innovative framework that makes the most of available data by learning representations of a multi-modal input that are resilient to modality dropping at test time, using recent advances in mutual information maximization. By maximizing cross-modal information at train time, we are able to outperform several state-of-the-art baselines in two different settings: medical image classification and segmentation. In particular, our method is shown to have a strong impact on the inference-time performance of weaker modalities.
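The core idea of the abstract, maximizing mutual information between the representations of two modalities of the same sample at train time, is commonly instantiated with a contrastive (InfoNCE-style) lower bound on mutual information. The sketch below is a minimal illustration of that general technique, not the paper's exact estimator; the embedding shapes, temperature value, and modality names (`z_ct`, `z_mri`) are illustrative assumptions:

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss: a contrastive lower bound on the mutual information
    between two views (modalities) of the same samples. Matching rows of
    z_a and z_b are positives; all other pairs are negatives.
    Hypothetical sketch, not the paper's exact objective."""
    # L2-normalize embeddings so logits are scaled cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / temperature      # (N, N) similarity matrix
    # Row-wise log-softmax; positives sit on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z_ct = rng.normal(size=(8, 16))                  # e.g. CT-scan embeddings
z_mri = z_ct + 0.01 * rng.normal(size=(8, 16))   # well-aligned MRI embeddings
z_unrelated = rng.normal(size=(8, 16))           # unaligned embeddings

loss_aligned = info_nce(z_ct, z_mri)
loss_random = info_nce(z_ct, z_unrelated)
```

Minimizing such a loss during training pushes the two modality encoders toward a shared representation, which is what makes the learned features usable even when one modality is dropped at test time: `loss_aligned` comes out much lower than `loss_random` because aligned pairs dominate their row of the similarity matrix.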