Paper Title


D2-LRR: A Dual-Decomposed MDLatLRR Approach for Medical Image Fusion

Authors

Song, Xu; Shen, Tianyu; Li, Hui; Wu, Xiao-Jun

Abstract

In image fusion tasks, an ideal image decomposition method can bring better performance. MDLatLRR does a good job in this respect, but there is still room for improvement. Since MDLatLRR focuses solely on the detail parts (salient features) extracted from input images via latent low-rank representation (LatLRR), the base parts (principal features) extracted by LatLRR are not fully utilized. Therefore, we introduce an enhanced multi-level decomposition method named dual-decomposed MDLatLRR (D2-LRR), which effectively analyzes and utilizes all image features extracted through LatLRR. Specifically, color images are converted into the YUV color space and grayscale images, and the Y channel and grayscale images are fed into the trained LatLRR parameters to obtain the detail parts from four rounds of decomposition together with the base parts. The base parts are then fused using an average strategy, while the detail parts are fused using a nuclear-norm operation. The fused image is finally transformed back into an RGB image, yielding the final fusion output. We apply D2-LRR to medical image fusion tasks, fusing the detail parts with the nuclear-norm operation and the base parts with the average strategy. Comparative analyses with existing methods show that our proposed approach attains state-of-the-art fusion performance in both objective and subjective assessments.
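
As a rough illustration of the two fusion strategies named in the abstract (average fusion of the base parts, patch-wise nuclear-norm weighting of the detail parts), here is a minimal NumPy sketch. The patch size, the weighting scheme, and all function names are illustrative assumptions; the LatLRR decomposition itself is omitted and the base/detail inputs are stand-ins, so this is not the authors' reference implementation.

```python
# Minimal sketch of the two fusion strategies described in the abstract.
# Assumes the multi-level LatLRR decomposition has already produced base and
# detail parts for the two source images (here replaced by toy arrays).
import numpy as np


def fuse_base(base_a: np.ndarray, base_b: np.ndarray) -> np.ndarray:
    """Average strategy for the base (principal-feature) parts."""
    return 0.5 * (base_a + base_b)


def fuse_detail(detail_a: np.ndarray, detail_b: np.ndarray, patch: int = 16) -> np.ndarray:
    """Nuclear-norm-weighted fusion of the detail (salient-feature) parts.

    For each non-overlapping patch, the nuclear norm (sum of singular values)
    of the two source patches is used as a saliency weight. Patch size and
    non-overlapping tiling are assumptions made for this sketch.
    """
    fused = np.zeros_like(detail_a)
    h, w = detail_a.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            pa = detail_a[y:y + patch, x:x + patch]
            pb = detail_b[y:y + patch, x:x + patch]
            na = np.linalg.norm(pa, ord="nuc")   # nuclear norm of patch A
            nb = np.linalg.norm(pb, ord="nuc")   # nuclear norm of patch B
            s = na + nb
            wa = na / s if s > 0 else 0.5        # relative saliency weight
            fused[y:y + patch, x:x + patch] = wa * pa + (1.0 - wa) * pb
    return fused


if __name__ == "__main__":
    # Toy stand-ins for the base/detail parts of two registered source images
    # (e.g. the Y channels of a CT and an MRI slice after decomposition).
    rng = np.random.default_rng(0)
    base_a, base_b = rng.random((128, 128)), rng.random((128, 128))
    detail_a, detail_b = rng.random((128, 128)) - 0.5, rng.random((128, 128)) - 0.5

    fused_y = fuse_base(base_a, base_b) + fuse_detail(detail_a, detail_b)
    print(fused_y.shape)  # (128, 128); would be merged back with U/V channels
```

In a full pipeline this fused Y channel would be recombined with the chrominance channels and converted back to RGB, as the abstract describes.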
