Paper Title


PUERT: Probabilistic Under-sampling and Explicable Reconstruction Network for CS-MRI

Paper Authors

Jingfen Xie, Jian Zhang, Yongbing Zhang, Xiangyang Ji

Abstract


Compressed Sensing MRI (CS-MRI) aims at reconstructing de-aliased images from sub-Nyquist sampled k-space data to accelerate MR imaging, thus presenting two basic issues, i.e., where to sample and how to reconstruct. To deal with both problems simultaneously, we propose a novel end-to-end Probabilistic Under-sampling and Explicable Reconstruction neTwork, dubbed PUERT, to jointly optimize the sampling pattern and the reconstruction network. Instead of learning a deterministic mask, the proposed sampling subnet explores an optimal probabilistic sub-sampling pattern, described by independent Bernoulli random variables at each possible sampling point, thus retaining robustness and stochasticity for a more reliable CS reconstruction. A dynamic gradient estimation strategy is further introduced to gradually approximate the binarization function in backward propagation, which efficiently preserves the gradient information and further improves the reconstruction quality. Moreover, in our reconstruction subnet, we adopt a model-based network design scheme with high efficiency and interpretability, which is shown to assist in further exploitation of the sampling subnet. Extensive experiments on two widely used MRI datasets demonstrate that our proposed PUERT not only achieves state-of-the-art results in terms of both quantitative metrics and visual quality but also yields a sub-sampling pattern and a reconstruction model that are both customized to the training data.
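The abstract's two key sampling-subnet ideas can be illustrated together: a hard binary mask is drawn from per-location Bernoulli probabilities in the forward pass, while the backward pass uses a smooth surrogate whose slope grows over training so that its gradient gradually approximates the non-differentiable binarization. The following NumPy sketch is illustrative only and is not the paper's implementation; the grid size, probability values, and sigmoid surrogate are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_binarize(probs, u, slope):
    """Backward-pass surrogate: a sigmoid centered on the sampled noise u.
    As `slope` grows, its output approaches the hard 0/1 mask, so its
    gradient better reflects the true binarization function."""
    return 1.0 / (1.0 + np.exp(-slope * (probs - u)))

# Hypothetical learnable sampling probabilities for an 8x8 k-space grid,
# initialized uniformly at 25% (i.e., roughly 4x acceleration).
probs = np.full((8, 8), 0.25, dtype=np.float64)

# Forward pass: independent Bernoulli draws, realized by comparing the
# probabilities against shared uniform noise.
u = rng.random(probs.shape)
mask = (probs > u).astype(np.float64)

# Increasing the slope makes the surrogate converge to the hard mask.
for slope in (1.0, 10.0, 100.0):
    gap = np.abs(soft_binarize(probs, u, slope) - mask).mean()
    print(f"slope={slope:6.1f}  mean |soft - hard| = {gap:.4f}")
```

In an actual training loop, `probs` would be updated by gradients flowing through `soft_binarize`, while the reconstruction subnet only ever sees the hard `mask`; the paper's "dynamic" aspect corresponds to scheduling `slope` upward as training progresses.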
