Paper Title


Interpretable Detail-Fidelity Attention Network for Single Image Super-Resolution

Authors

Yuanfei Huang, Jie Li, Xinbo Gao, Yanting Hu, Wen Lu

Abstract


Benefiting from the strong capabilities of deep CNNs for feature representation and nonlinear mapping, deep-learning-based methods have achieved excellent performance in single image super-resolution. However, most existing SR methods depend on the high capacity of networks initially designed for visual recognition, and rarely consider the original intention of super-resolution, namely detail fidelity. To pursue this intention, two challenging issues must be solved: (1) learning appropriate operators that adapt to the diverse characteristics of smooth regions and details; (2) improving the model's ability to preserve low-frequency smooth regions and reconstruct high-frequency details. To solve them, we propose a purposeful and interpretable detail-fidelity attention network that progressively processes smooth regions and details in a divide-and-conquer manner. This is a novel and specific perspective on image super-resolution aimed at improving detail fidelity, rather than blindly designing or employing deep CNN architectures merely for feature representation in local receptive fields. In particular, we propose Hessian filtering for interpretable feature representation that is well suited to detail inference, along with a dilated encoder-decoder and a distribution alignment cell that refine the inferred Hessian features in a morphological manner and a statistical manner, respectively. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively. Code is available at https://github.com/YuanfeiHuang/DeFiAN.
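The core interpretable operator named in the abstract is Hessian filtering, which separates high-frequency details from low-frequency smooth regions via second-order derivatives. The sketch below is a minimal illustration of that idea, not DeFiAN's exact formulation: fixed finite-difference kernels approximate the per-pixel 2x2 Hessian, and the larger eigenvalue magnitude serves as a detail response. The kernel choices and the eigenvalue-based response are illustrative assumptions; see the linked repository for the authors' implementation.

```python
# A minimal sketch of Hessian filtering as a fixed-kernel convolution,
# illustrating an interpretable, hand-crafted detail detector.
# Kernels and the eigenvalue-based response are assumptions for
# illustration, not DeFiAN's exact multi-scale formulation.
import torch
import torch.nn.functional as F

def hessian_detail_response(x: torch.Tensor) -> torch.Tensor:
    """x: (N, 1, H, W) grayscale batch -> per-pixel detail response."""
    # Finite-difference approximations of the second derivatives.
    k_xx = torch.tensor([[[[1., -2., 1.]]]])         # d^2/dx^2
    k_yy = k_xx.transpose(2, 3)                      # d^2/dy^2
    k_xy = 0.25 * torch.tensor([[[[1., 0., -1.],
                                  [0., 0., 0.],
                                  [-1., 0., 1.]]]])  # d^2/dxdy
    i_xx = F.conv2d(x, k_xx, padding=(0, 1))
    i_yy = F.conv2d(x, k_yy, padding=(1, 0))
    i_xy = F.conv2d(x, k_xy, padding=1)
    # Eigenvalues of the 2x2 Hessian [[i_xx, i_xy], [i_xy, i_yy]]:
    # lambda = (i_xx + i_yy)/2 +- sqrt(((i_xx - i_yy)/2)^2 + i_xy^2)
    mean = 0.5 * (i_xx + i_yy)
    root = torch.sqrt(0.25 * (i_xx - i_yy) ** 2 + i_xy ** 2)
    lam1, lam2 = mean + root, mean - root
    # Large |eigenvalue| marks high-frequency detail;
    # near-zero marks smooth regions.
    return torch.maximum(lam1.abs(), lam2.abs())

if __name__ == "__main__":
    img = torch.rand(1, 1, 64, 64)
    print(hessian_detail_response(img).shape)  # torch.Size([1, 1, 64, 64])
```

Such a response map can gate which pixels are routed to a detail branch versus a smoothing branch, which is the divide-and-conquer intuition the abstract describes.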
