Paper Title

Hypercomplex Image-to-Image Translation

Paper Authors

Eleonora Grassucci, Luigi Sigillo, Aurelio Uncini, Danilo Comminiello

Abstract

Image-to-image translation (I2I) aims at transferring the content representation from an input domain to an output one, bouncing along different target domains. Recent I2I generative models, which gain outstanding results in this task, comprise a set of diverse deep networks each with tens of million parameters. Moreover, images are usually three-dimensional being composed of RGB channels and common neural models do not take dimensions correlation into account, losing beneficial information. In this paper, we propose to leverage hypercomplex algebra properties to define lightweight I2I generative models capable of preserving pre-existing relations among image dimensions, thus exploiting additional input information. On manifold I2I benchmarks, we show how the proposed Quaternion StarGANv2 and parameterized hypercomplex StarGANv2 (PHStarGANv2) reduce parameters and storage memory amount while ensuring high domain translation performance and good image quality as measured by FID and LPIPS scores. Full code is available at: https://github.com/ispamm/HI2I.
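
The parameter savings described in the abstract come from parameterized hypercomplex multiplication (PHM) layers, in which the weight matrix is assembled as a sum of Kronecker products between small learned "algebra" matrices and reduced weight blocks, so roughly 1/n of the parameters of a standard layer are needed. Below is a minimal PyTorch sketch of such a layer under that assumption; the class name `PHMLinear`, the initialization scale, and the usage values are illustrative and not the authors' implementation (see the linked repository for the official code).

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Minimal sketch of a parameterized hypercomplex multiplication (PHM) layer.

    The full weight is built as W = sum_i A_i kron S_i, so an
    in_features x out_features layer learns roughly 1/n of the parameters
    of a standard nn.Linear (illustrative simplification).
    """

    def __init__(self, n: int, in_features: int, out_features: int):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        # n small (n x n) matrices defining the hypercomplex-like algebra, learned from data
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.05)
        # n reduced weight blocks of size (in_features/n) x (out_features/n)
        self.S = nn.Parameter(torch.randn(n, in_features // n, out_features // n) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assemble the full weight matrix as a sum of Kronecker products A_i kron S_i
        W = sum(torch.kron(self.A[i], self.S[i]) for i in range(self.n))
        return x @ W + self.bias


# Usage: with n=4 the weight sharing mimics a quaternion-like structure and
# uses about a quarter of the parameters of nn.Linear(256, 512).
layer = PHMLinear(n=4, in_features=256, out_features=512)
y = layer(torch.randn(8, 256))
```

With n=4 this layer holds 4*4*4 + 4*64*128 = 32,832 weights versus 131,072 for a dense 256x512 linear layer, which is the roughly 4x reduction that motivates the quaternion and PHStarGANv2 variants.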
