Paper Title
Sensorless Freehand 3D Ultrasound Reconstruction via Deep Contextual Learning
Paper Authors
Paper Abstract
Transrectal ultrasound (US) is the most commonly used imaging modality to guide prostate biopsy, and its 3D volume provides richer context information than individual 2D frames. Current methods for 3D volume reconstruction from freehand US scans require external tracking devices to provide the spatial position of every frame. In this paper, we propose a deep contextual learning network (DCL-Net) that can efficiently exploit the image feature relationship between US frames and reconstruct 3D US volumes without any tracking device. The proposed DCL-Net applies 3D convolutions over a US video segment for feature extraction. An embedded self-attention module makes the network focus on speckle-rich areas for better spatial movement prediction. We also propose a novel case-wise correlation loss to stabilize the training process and improve accuracy. Highly promising results have been obtained with the developed method. Experiments with ablation studies and comparisons against other state-of-the-art methods demonstrate the superior performance of the proposed approach. Source code of this work is publicly available at https://github.com/DIAL-RPI/FreehandUSRecon.
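The abstract does not spell out the case-wise correlation loss, but one common way to realize such a loss is to penalize low Pearson correlation between the predicted and ground-truth inter-frame motion parameters within a single scan (case). The sketch below illustrates this idea in NumPy; the function name, the 6-DOF parameterization (3 translations + 3 rotations per neighboring-frame pair), and the per-DOF averaging are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def case_wise_correlation_loss(pred, gt, eps=1e-8):
    """Illustrative sketch of a case-wise correlation loss (an assumption,
    not necessarily the paper's exact formula).

    pred, gt : arrays of shape (num_frame_pairs, 6), the predicted and
               ground-truth 6-DOF motion parameters for one scan (case).
    Returns the mean over DOFs of (1 - Pearson correlation), so perfectly
    correlated predictions give ~0 and anti-correlated ones give ~2.
    """
    loss = 0.0
    for d in range(pred.shape[1]):           # one correlation per DOF
        p = pred[:, d] - pred[:, d].mean()   # center each sequence
        g = gt[:, d] - gt[:, d].mean()
        denom = np.sqrt((p ** 2).sum() * (g ** 2).sum()) + eps
        loss += 1.0 - (p * g).sum() / denom  # 1 - Pearson correlation
    return loss / pred.shape[1]
```

Because correlation is invariant to per-sequence scale and offset, a loss of this form encourages the network to match the *trend* of the probe's motion across the whole case, which is what makes it useful as a stabilizing term alongside a standard regression loss.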