Paper Title
Adipose Tissue Segmentation in Unlabeled Abdomen MRI using Cross Modality Domain Adaptation
Paper Authors
Abstract
Abdominal fat quantification is critical since multiple vital organs are located within this region. Although computed tomography (CT) is a highly sensitive modality for segmenting body fat, it involves ionizing radiation, which makes magnetic resonance imaging (MRI) a preferable alternative for this purpose. Additionally, the superior soft-tissue contrast in MRI could lead to more accurate results. Yet, segmenting fat in MRI scans is highly labor intensive. In this study, we propose a deep-learning-based algorithm to automatically quantify fat tissue from MR images through cross-modality adaptation. Our method does not require supervised labeling of MR scans; instead, we utilize a cycle generative adversarial network (C-GAN) to construct a pipeline that transforms the existing MR scans into their equivalent synthetic CT (s-CT) images, where fat segmentation is relatively easier due to the descriptive nature of Hounsfield units (HU) in CT images. The fat segmentation results for MRI scans were evaluated by an expert radiologist. Qualitative evaluation of our segmentation results shows average success scores of 3.80/5 and 4.54/5 for visceral and subcutaneous fat segmentation in MR images, respectively.
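To illustrate why segmentation is easier in (synthetic) CT, the final step of such a pipeline can be sketched as a simple HU-window threshold. This is a minimal sketch, not the paper's implementation: the abstract does not state the thresholds used, so the −190 to −30 HU window below is an assumption based on a commonly used range for adipose tissue.

```python
import numpy as np

# Assumed HU window for adipose tissue (a commonly used range);
# the paper's exact thresholds are not given in the abstract.
FAT_HU_MIN, FAT_HU_MAX = -190, -30

def segment_fat(ct_hu: np.ndarray) -> np.ndarray:
    """Return a boolean fat mask from a CT (or synthetic CT) volume in HU."""
    return (ct_hu >= FAT_HU_MIN) & (ct_hu <= FAT_HU_MAX)

# Toy 2x3 "image": only the -100 HU voxel falls in the fat window.
ct = np.array([[-1000, -100, 40],
               [   60,  200, -500]])
mask = segment_fat(ct)
print(mask.sum())  # prints 1
```

In MRI, no such globally calibrated intensity scale exists, which is why the MR-to-s-CT translation step makes the downstream thresholding tractable.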