Paper Title

Multi-Outputs Is All You Need For Deblur

Paper Authors

Sidun Liu, Peng Qiao, Yong Dou

Paper Abstract

Image deblurring is an ill-posed task: for a given blurry image there exist infinitely many feasible sharp solutions. Modern deep learning approaches usually discard the learning of blur kernels and directly employ end-to-end supervised learning. Popular deblurring datasets define the label as one of the feasible solutions. However, we argue that directly specifying a single label is not reasonable, especially when that label is sampled from a random distribution. Therefore, we propose to make the network learn the distribution of feasible solutions, and based on this consideration we design a novel multi-head output architecture and a corresponding loss function for distribution learning. Our approach enables the model to output multiple feasible solutions that approximate the target distribution. We further propose a novel parameter-multiplexing method that reduces the number of parameters and the computational cost while improving performance. We evaluated our approach on multiple image deblurring models, including the current state-of-the-art NAFNet. The best-overall PSNR (picking the highest-scoring head for each validation image) outperforms the compared baselines by 0.11~0.18 dB. The best-single-head PSNR (picking the head that performs best across the whole validation set) outperforms the compared baselines by 0.04~0.08 dB. The code is available at https://github.com/Liu-SD/multi-output-deblur.
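For intuition, below is a minimal PyTorch-style sketch of the ideas named in the abstract: a multi-head output that emits K candidate restorations, a winner-take-all style loss that back-propagates only through the head closest to the label so different heads can cover different feasible solutions, and the two PSNR protocols used above. This is an illustrative assumption, not the authors' released implementation (see the linked repository): the backbone, conv heads, and exact loss are placeholders, and the paper's parameter-multiplexing scheme is not reproduced here.

```python
# Hedged sketch only; consult https://github.com/Liu-SD/multi-output-deblur
# for the actual architecture, loss, and parameter multiplexing.
import torch
import torch.nn as nn

class MultiHeadDeblur(nn.Module):
    """Wrap a feature backbone (e.g. a NAFNet-like network) with K output heads."""
    def __init__(self, backbone: nn.Module, feat_channels: int, num_heads: int = 4):
        super().__init__()
        self.backbone = backbone  # blurry image -> feature map (B, C, H, W)
        self.heads = nn.ModuleList(
            [nn.Conv2d(feat_channels, 3, kernel_size=3, padding=1)
             for _ in range(num_heads)]
        )

    def forward(self, blurry: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(blurry)
        # K candidate sharp images, stacked as (B, K, 3, H, W).
        return torch.stack([head(feat) for head in self.heads], dim=1)

def winner_take_all_loss(outputs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Per image, train only the head whose output is closest to the label,
    so the K heads spread over distinct feasible solutions.
    outputs: (B, K, 3, H, W); target: (B, 3, H, W)."""
    per_head = (outputs - target.unsqueeze(1)).abs().mean(dim=(2, 3, 4))  # (B, K)
    return per_head.min(dim=1).values.mean()

def psnr(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """PSNR for images scaled to [0, 1]; broadcasts over the head dimension."""
    mse = ((x - y) ** 2).mean(dim=(-3, -2, -1)).clamp_min(1e-12)
    return 10.0 * torch.log10(1.0 / mse)

def evaluate(outputs: torch.Tensor, targets: torch.Tensor):
    """outputs: (N, K, 3, H, W) over the validation set; targets: (N, 3, H, W).
    'Best overall' picks the best head per image; 'best single head' fixes
    one head index for the whole set, matching the abstract's two metrics."""
    scores = psnr(outputs, targets.unsqueeze(1))      # (N, K)
    best_overall = scores.max(dim=1).values.mean()    # per-image best head
    best_single_head = scores.mean(dim=0).max()       # best fixed head
    return best_overall, best_single_head
```

The winner-take-all minimum is a standard way to let a fixed set of outputs approximate a multi-modal target distribution; whether the paper uses exactly this form of the loss should be checked against the repository.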
