Paper Title
A$^3$DSegNet: Anatomy-aware artifact disentanglement and segmentation network for unpaired segmentation, artifact reduction, and modality translation
Paper Authors
Paper Abstract
Spinal surgery planning necessitates automatic segmentation of vertebrae in cone-beam computed tomography (CBCT), an intraoperative imaging modality that is widely used in interventions. However, CBCT images are low-quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects, making vertebra segmentation, even when performed manually, a demanding task. In contrast, there exists a wealth of artifact-free, high-quality CT images with vertebra annotations. This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations. To overcome the domain and artifact gaps between CBCT and CT, the three heterogeneous tasks of vertebra segmentation, artifact reduction, and modality translation must be addressed jointly. To this end, we propose a novel anatomy-aware artifact disentanglement and segmentation network (A$^3$DSegNet) that intensively leverages knowledge sharing across these three tasks to promote learning. Specifically, it takes a random pair of CBCT and CT images as input and manipulates the synthesis and segmentation via different decoding combinations from the disentangled latent layers. Then, by enforcing various forms of consistency among the synthesized images and among the segmented vertebrae, learning is achieved without paired (i.e., anatomically identical) data. Finally, we stack 2D slices together and build 3D networks on top to obtain the final 3D segmentation result. Extensive experiments on a large number of clinical CBCT (21,364) and CT (17,089) images show that the proposed A$^3$DSegNet performs significantly better than state-of-the-art competing methods trained independently for each task and, remarkably, achieves an average Dice coefficient of 0.926 for unpaired 3D CBCT vertebra segmentation.
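The "different decoding combinations from the disentangled latent layers" can be pictured as follows. This is a toy, purely illustrative NumPy sketch of the disentangle-and-recombine idea, not the paper's network: in A$^3$DSegNet the encoders and decoders are learned, whereas here the artifact is split off analytically in a made-up additive image model, just to show which latent pairings yield artifact reduction, artifact transfer, and self-reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy image model (an assumption for illustration only):
# a CBCT slice = anatomy + artifact; a CT slice = anatomy alone.
anatomy_cbct = rng.random((8, 8))
artifact = rng.random((8, 8))
cbct = anatomy_cbct + artifact
ct = rng.random((8, 8))  # unpaired CT: different anatomy, no artifact

def encode(image, known_artifact=None):
    """Idealized 'encoder': split an image into (anatomy code, artifact code).
    The CT branch is artifact-free, so its artifact code is zero."""
    if known_artifact is None:
        return image, np.zeros_like(image)
    return image - known_artifact, known_artifact

def decode(anatomy, artifact):
    """Idealized 'decoder': recombine any anatomy code with any artifact code."""
    return anatomy + artifact

a_cb, art_cb = encode(cbct, known_artifact=artifact)
a_ct, art_ct = encode(ct)

# The four decoding combinations behind the abstract's consistency losses:
cbct_clean = decode(a_cb, art_ct)  # artifact-reduced CBCT (modality translation)
ct_dirty = decode(a_ct, art_cb)    # CT with the CBCT artifact transferred onto it
cbct_recon = decode(a_cb, art_cb)  # self-reconstruction, should recover the input
ct_recon = decode(a_ct, art_ct)
```

In the real network, a shared anatomy space also feeds the segmentation decoder, which is what lets CT annotations supervise CBCT segmentation without paired data.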
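The reported 0.926 average Dice coefficient measures volumetric overlap between predicted and ground-truth vertebra masks. As a reminder of the metric itself (not the paper's evaluation code), a minimal NumPy implementation for binary 3D masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2 * |P ∩ T| / (|P| + |T|) for binary masks.
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D masks: identical masks give Dice = 1.0; a mask covering half
# the voxels of the other gives 2*32 / (64 + 32) ≈ 0.667.
full = np.ones((4, 4, 4), dtype=np.uint8)
half = np.zeros((4, 4, 4), dtype=np.uint8)
half[:2] = 1
print(round(dice_coefficient(full, full), 3))  # 1.0
print(round(dice_coefficient(full, half), 3))  # 0.667
```

Dice is preferred over plain voxel accuracy here because vertebrae occupy a small fraction of the volume, so accuracy would be dominated by background voxels.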