Paper Title


Gabor is Enough: Interpretable Deep Denoising with a Gabor Synthesis Dictionary Prior

Paper Authors

Nikola Janjušević, Amirhossein Khalilian-Gourtani, Yao Wang

Abstract


Image processing neural networks, natural and artificial, have a long history with orientation-selectivity, often described mathematically as Gabor filters. Gabor-like filters have been observed in the early layers of CNN classifiers and even throughout low-level image processing networks. In this work, we take this observation to the extreme and explicitly constrain the filters of a natural-image denoising CNN to be learned 2D real Gabor filters. Surprisingly, we find that the proposed network (GDLNet) can achieve near state-of-the-art denoising performance amongst popular fully convolutional neural networks, with only a fraction of the learned parameters. We further verify that this parameterization maintains the noise-level generalization (training vs. inference mismatch) characteristics of the base network, and investigate the contribution of individual Gabor filter parameters to the performance of the denoiser. We present positive findings for the interpretation of dictionary learning networks as performing accelerated sparse-coding via the importance of untied learned scale parameters between network layers. Our network's success suggests that representations used by low-level image processing CNNs can be as simple and interpretable as Gabor filterbanks.
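The abstract's core idea is to parameterize each convolutional filter as a 2D real Gabor filter: a Gaussian envelope modulating an oriented cosine carrier, so that only a handful of scalars (scale, frequency, orientation, phase, envelope width) are learned per filter instead of every pixel weight. A minimal NumPy sketch of such a parameterization is below; the function and parameter names are illustrative assumptions, not GDLNet's actual implementation.

```python
import numpy as np

def gabor_2d(size, scale, freq, theta, phase, sigma, gamma=1.0):
    """Sample a 2D real Gabor filter on a (size x size) grid.

    scale: amplitude; freq: carrier spatial frequency (cycles/pixel);
    theta: orientation (radians); phase: carrier phase offset;
    sigma: Gaussian envelope width; gamma: envelope aspect ratio.
    Parameter names are illustrative, not taken from the paper.
    """
    r = (size - 1) / 2.0
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    # Rotate the coordinate frame by the orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * xr + phase)
    return scale * envelope * carrier

# A small orientation-selective "filterbank": one filter per orientation.
bank = np.stack([gabor_2d(7, 1.0, 0.2, th, 0.0, 2.0)
                 for th in np.linspace(0.0, np.pi, 8, endpoint=False)])
print(bank.shape)  # 8 filters of size 7x7
```

In a network like the one described, these few scalars per filter would be the learnable parameters (optimized by backpropagation through the sampling above), which is why the paper reports only a fraction of the parameter count of a standard fully convolutional denoiser.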
