Title


Learning the Loss Functions in a Discriminative Space for Video Restoration

Authors

Younghyun Jo, Jaeyeon Kang, Seoung Wug Oh, Seonghyeon Nam, Peter Vajda, Seon Joo Kim

Abstract


With more advanced deep network architectures and learning schemes such as GANs, the performance of video restoration algorithms has greatly improved recently. Meanwhile, the loss functions for optimizing deep neural networks remain relatively unchanged. To this end, we propose a new framework for building effective loss functions by learning a discriminative space specific to a video restoration task. Our framework is similar to GANs in that we iteratively train two networks: a generator and a loss network. The generator learns to restore videos in a supervised fashion by matching ground truth features in the discriminative space learned by the loss network. In addition, we introduce a new relation loss to maintain temporal consistency in output videos. Experiments on video super-resolution and deblurring show that our method generates visually more pleasing videos with better quantitative perceptual metric values than other state-of-the-art methods.
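The abstract names two loss terms: a feature-matching loss computed in the loss network's learned discriminative space, and a relation loss that ties consecutive frames together. The following is a minimal NumPy sketch of how such terms are commonly computed; the function names, the use of an L1 distance, and the exact form of the relation loss (here, matching frame-to-frame feature differences) are illustrative assumptions, not the paper's precise definitions.

```python
import numpy as np

def feature_matching_loss(feats_fake, feats_real):
    # Mean L1 distance between the generator output's features and the
    # ground truth's features, averaged over the loss network's layers.
    # feats_fake / feats_real: lists of same-shaped feature arrays.
    return float(np.mean([np.mean(np.abs(f - r))
                          for f, r in zip(feats_fake, feats_real)]))

def relation_loss(frame_feats_fake, frame_feats_real):
    # Match the *differences* between consecutive frames' features, so the
    # restored video evolves over time the way the ground truth does.
    # frame_feats_*: lists of per-frame feature arrays (one per frame).
    diffs_fake = [b - a for a, b in zip(frame_feats_fake, frame_feats_fake[1:])]
    diffs_real = [b - a for a, b in zip(frame_feats_real, frame_feats_real[1:])]
    return float(np.mean([np.mean(np.abs(df - dr))
                          for df, dr in zip(diffs_fake, diffs_real)]))
```

In training, both terms would be evaluated on activations extracted from the loss network (playing the role a fixed perceptual network plays in feature-matching GAN losses), and the two networks would be updated alternately as in standard GAN training.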
