Paper Title
CANF-VC: Conditional Augmented Normalizing Flows for Video Compression
Paper Authors
Paper Abstract
This paper presents an end-to-end learning-based video compression system, termed CANF-VC, based on conditional augmented normalizing flows (CANF). Most learned video compression systems adopt the same hybrid-based coding architecture as traditional codecs. Recent research on conditional coding has shown the sub-optimality of hybrid-based coding and opened up opportunities for deep generative models to play a key role in creating new coding frameworks. CANF-VC represents a new attempt that leverages the conditional ANF to learn a video generative model for conditional inter-frame coding. We choose ANF because it is a special type of generative model that includes the variational autoencoder as a special case and is able to achieve better expressiveness. CANF-VC also extends the idea of conditional coding to motion coding, forming a purely conditional coding framework. Extensive experimental results on commonly used datasets confirm the superiority of CANF-VC over state-of-the-art methods. The source code of CANF-VC is available at https://github.com/NYCU-MAPL/CANF-VC.
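To make the conditional coding structure described above concrete, the following is a minimal PyTorch sketch of a conditional ANF coder built from additive autoencoding transform pairs conditioned on a motion-compensated reference frame. All names (CondANFStep, CondANFCoder), layer sizes, the additive coupling form, and the two-step depth are illustrative assumptions rather than the CANF-VC implementation, and quantization and entropy coding of the augmented latent are omitted.

```python
# Minimal, illustrative sketch of conditional ANF coding (not the CANF-VC code base).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Small conv net standing in for the analysis/synthesis transforms.
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_ch, 3, padding=1),
    )


class CondANFStep(nn.Module):
    """One additive autoencoding transform pair, conditioned on x_c."""

    def __init__(self, x_ch=3, z_ch=16):
        super().__init__()
        self.enc = conv_block(2 * x_ch, z_ch)      # (x, x_c) -> update of latent z
        self.dec = conv_block(x_ch + z_ch, x_ch)   # (z, x_c) -> update of frame x

    def forward(self, x, z, x_c):
        z = z + self.enc(torch.cat([x, x_c], dim=1))   # analysis: push frame info into z
        x = x - self.dec(torch.cat([z, x_c], dim=1))   # synthesis: strip it from x
        return x, z

    def inverse(self, x, z, x_c):
        x = x + self.dec(torch.cat([z, x_c], dim=1))
        z = z - self.enc(torch.cat([x, x_c], dim=1))
        return x, z


class CondANFCoder(nn.Module):
    """Stack of conditional ANF steps: the frame x is transformed given the
    condition x_c (e.g. a motion-compensated prediction), leaving a latent z to code."""

    def __init__(self, num_steps=2, x_ch=3, z_ch=16):
        super().__init__()
        self.steps = nn.ModuleList([CondANFStep(x_ch, z_ch) for _ in range(num_steps)])
        self.z_ch = z_ch

    def encode(self, x, x_c):
        z = torch.zeros(x.size(0), self.z_ch, x.size(2), x.size(3), device=x.device)
        for step in self.steps:
            x, z = step(x, z, x_c)
        # z would be quantized and entropy-coded; training would drive x toward x_c.
        return x, z

    def decode(self, z, x_c):
        # The decoder substitutes the condition for the transformed frame, so the
        # reconstruction is approximate unless the transformed frame matches x_c.
        x = x_c.clone()
        for step in reversed(self.steps):
            x, z = step.inverse(x, z, x_c)
        return x


if __name__ == "__main__":
    coder = CondANFCoder()
    frame = torch.rand(1, 3, 64, 64)      # current frame
    mc_frame = torch.rand(1, 3, 64, 64)   # motion-compensated prediction (condition)
    _, latent = coder.encode(frame, mc_frame)
    recon = coder.decode(latent, mc_frame)
    print(latent.shape, recon.shape)
```

Under these assumptions, the same conditional structure could be reused for motion coding by letting the condition be a motion predictor instead of a motion-compensated frame, which is the sense in which the abstract describes a purely conditional framework.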