Paper Title
Learning Local Implicit Fourier Representation for Image Warping
Authors
Abstract
Image warping aims to reshape images defined on rectangular grids into arbitrary shapes. Recently, implicit neural functions have shown remarkable performance in representing images in a continuous manner. However, a standalone multi-layer perceptron struggles to learn high-frequency Fourier coefficients. In this paper, we propose a local texture estimator for image warping (LTEW), followed by an implicit neural representation, to deform images into continuous shapes. Local textures estimated from a deep super-resolution (SR) backbone are multiplied by locally varying Jacobian matrices of a coordinate transformation to predict the Fourier responses of a warped image. Our LTEW-based neural function outperforms existing warping methods for asymmetric-scale SR and homography transformation. Furthermore, our algorithm generalizes well to arbitrary coordinate transformations, such as homography transformations with large magnification factors and equirectangular projection (ERP) perspective transformation, which are not provided during training.
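The core idea of modulating Fourier features by the local Jacobian of a coordinate transformation can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names are hypothetical, the Jacobian is approximated by finite differences (the paper's backbone may compute it analytically), and the estimated frequencies are taken as given rather than predicted by a network. By the chain rule, a component cos(2πk·C(x')) of a warped signal g(x') = f(C(x')) behaves locally like a component with frequency Jᵀk in the output domain, where J = ∂C/∂x'.

```python
import numpy as np

def jacobian(transform, coord, eps=1e-4):
    # Central finite-difference Jacobian d(transform)/d(coord), shape (2, 2).
    # `transform` maps a 2-vector output coordinate to a 2-vector input coordinate.
    J = np.empty((2, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps
        J[:, j] = (transform(coord + d) - transform(coord - d)) / (2 * eps)
    return J

def local_fourier_features(freqs, transform, out_coord):
    # Map input-domain frequencies through the local Jacobian (chain rule):
    # each frequency k becomes J^T k in the output domain, so the implicit
    # function is evaluated with frequencies adapted to the local warp.
    J = jacobian(transform, out_coord)          # (2, 2)
    warped = freqs @ J                          # row-wise J^T k, shape (N, 2)
    phase = 2 * np.pi * warped @ out_coord      # (N,)
    return np.concatenate([np.cos(phase), np.sin(phase)])
```

For an identity transform the Jacobian is the identity matrix and the features reduce to ordinary Fourier features; for a non-uniform warp (e.g. a homography), the features vary with `out_coord`, which is what lets a single representation handle locally varying magnification.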