Paper Title

Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation

Paper Authors

Tianyi Shi, Xiaohuan Ding, Wei Zhou, Feng Pan, Zengqiang Yan, Xiang Bai, Xin Yang

Paper Abstract

Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms. However, achieving high pixel-wise accuracy, a complete topological structure and robustness to various contrast variations is critical and challenging, and most existing methods focus on only one or two of these aspects. In this paper, we present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach. Specifically, we compute a multiscale affinity field for each pixel, capturing its semantic relationships with neighboring pixels in the predicted mask image. This field represents the local geometry of vessel segments of different sizes, allowing us to learn spatial- and scale-aware adaptive weights to strengthen vessel features. We evaluate our AFN on four different types of vascular datasets: the X-ray angiography coronary vessel dataset (XCAD), the portal vein dataset (PV), the digital subtraction angiography cerebrovascular vessel dataset (DSA) and the retinal vessel dataset (DRIVE). Extensive experimental results demonstrate that our AFN outperforms state-of-the-art methods on both accuracy and topological metrics, while also being more robust to various contrast changes. The source code of this work is available at https://github.com/TY-Shi/AFN.
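The abstract says the method computes, for each pixel, a multiscale affinity field that captures its semantic relationships with neighboring pixels in the predicted mask, but it does not spell out the affinity definition. The following is a minimal PyTorch sketch of one plausible reading, not the authors' implementation (see https://github.com/TY-Shi/AFN for that): the helper name `affinity_field`, the 8-neighborhood offsets, the radii (1, 2, 4), and the soft label-agreement formula are all illustrative assumptions.

```python
import torch

def affinity_field(mask, radii=(1, 2, 4)):
    """Sketch of a multiscale affinity field from a predicted mask.

    mask: (B, 1, H, W) tensor of foreground probabilities.
    Returns a (B, 8 * len(radii), H, W) tensor; each channel holds the
    affinity between every pixel and one of its 8 neighbors at a given
    radius, here taken to be soft label agreement (near 1 when both
    pixels are foreground or both background, near 0 otherwise).
    """
    # Unit offsets (dy, dx) of the 8-neighborhood.
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    fields = []
    for r in radii:
        for dy, dx in offsets:
            # Shift the mask by (r*dy, r*dx); torch.roll wraps at the
            # image border, a simplification acceptable for a sketch.
            shifted = torch.roll(mask, shifts=(r * dy, r * dx), dims=(2, 3))
            # Soft agreement between each pixel and its shifted neighbor.
            agree = mask * shifted + (1 - mask) * (1 - shifted)
            fields.append(agree)
    return torch.cat(fields, dim=1)

# Usage with a dummy predicted mask:
probs = torch.sigmoid(torch.randn(1, 1, 64, 64))
field = affinity_field(probs)  # shape (1, 24, 64, 64)
```

Under this reading, small radii encode the local geometry of thin vessels while larger radii cover wider segments, which is consistent with the abstract's claim that the field represents vessel segments of different sizes and drives spatial- and scale-aware adaptive weights.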
