Paper Title
Automatic Breast Lesion Classification by Joint Neural Analysis of Mammography and Ultrasound
Paper Authors
Paper Abstract
Mammography and ultrasound are extensively used by radiologists as complementary modalities to achieve better performance in breast cancer diagnosis. However, existing computer-aided diagnosis (CAD) systems for the breast are generally based on a single modality. In this work, we propose a deep-learning-based method for classifying breast cancer lesions from their respective mammography and ultrasound images. We present various approaches and show a consistent improvement in performance when utilizing both modalities. The proposed approach is based on a GoogLeNet architecture, fine-tuned for our data in two training steps. First, a distinct neural network is trained separately for each modality, generating high-level features. Then, the aggregated features originating from each modality are used to train a multimodal network to provide the final classification. In quantitative experiments, the proposed approach achieves an AUC of 0.94, outperforming state-of-the-art models trained on a single modality. Moreover, it performs similarly to an average radiologist, surpassing two out of four radiologists participating in a reader study. The promising results suggest that the proposed method may become a valuable decision support tool for breast radiologists.
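The two-step pipeline described in the abstract (per-modality feature extraction, then feature aggregation feeding a multimodal classifier) can be sketched roughly as follows. This is a minimal illustrative sketch only: the linear `extract_features` stand-ins replace the paper's fine-tuned GoogLeNet backbones, and all dimensions and weights are hypothetical assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image_vec, weights):
    # Stand-in for a fine-tuned per-modality CNN (GoogLeNet in the paper):
    # maps an input representation to a high-level feature vector.
    return np.tanh(image_vec @ weights)

# Hypothetical sizes: 64-dim flattened inputs, 16-dim features per modality.
w_mammo = rng.normal(size=(64, 16))   # mammography branch (illustrative)
w_us = rng.normal(size=(64, 16))      # ultrasound branch (illustrative)
w_fusion = rng.normal(size=(32,))     # multimodal classification head

def classify_lesion(mammo_vec, us_vec):
    f_m = extract_features(mammo_vec, w_mammo)
    f_u = extract_features(us_vec, w_us)
    fused = np.concatenate([f_m, f_u])   # aggregate features from both modalities
    logit = fused @ w_fusion             # multimodal network provides final score
    return 1.0 / (1.0 + np.exp(-logit))  # probability-like malignancy score

score = classify_lesion(rng.normal(size=64), rng.normal(size=64))
```

In the paper's setup each branch is trained on its own modality first, and only the fusion stage is trained on the concatenated features; the sketch above shows the data flow at inference time, not the two-stage training itself.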