Paper Title
Anime-to-Real Clothing: Cosplay Costume Generation via Image-to-Image Translation
Paper Authors
Paper Abstract
Cosplay has grown from its origins at fan conventions into a billion-dollar global dress phenomenon. To facilitate imagination and reinterpretation from animated images to real garments, this paper presents an automatic costume image generation method based on image-to-image translation. Cosplay items can vary significantly in style and shape, and conventional methods cannot be directly applied to the wide variation in clothing images that is the focus of this study. To solve this problem, our method starts by collecting and preprocessing web images to prepare a cleaned, paired dataset of the anime and real domains. We then present a novel architecture for generative adversarial networks (GANs) to facilitate high-quality cosplay image generation. Our GAN incorporates several effective techniques to bridge the gap between the two domains and to improve both the global and local consistency of generated images. Experiments with two types of evaluation metrics demonstrate that the proposed GAN achieves better performance than existing methods, and that the images it generates are more realistic than those produced by conventional methods. Our code and pretrained models are available on the web.
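The abstract does not detail the GAN architecture, so the following is only a minimal sketch of the paired image-to-image translation setup it describes, in the style of pix2pix (adversarial loss plus a pixel-wise L1 term on paired data). It is not the paper's actual model: the Generator and Discriminator layer shapes, the train_step helper, and the lambda_l1 weight are all illustrative assumptions.

# Hypothetical sketch of a paired anime-to-real translation training step.
# All layer sizes and loss weights below are assumptions, not the paper's.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping an anime image to a real-garment image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic scoring (anime, real) image pairs locally."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-level logits
        )
    def forward(self, anime, real):
        # Condition on the anime input by channel-wise concatenation.
        return self.net(torch.cat([anime, real], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # assumed weight balancing adversarial and pixel losses

def train_step(anime, real):
    # Discriminator update: real pairs labeled 1, generated pairs labeled 0.
    fake = G(anime).detach()
    d_real, d_fake = D(anime, real), D(anime, fake)
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool D while staying close to the paired target.
    fake = G(anime)
    d_fake = D(anime, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example: one step on a random batch of 256x256 paired images.
anime, real = torch.randn(4, 3, 256, 256), torch.randn(4, 3, 256, 256)
print(train_step(anime, real))

In this kind of setup, the L1 term pulls the output toward the paired target as a whole while the patch-level discriminator judges local realism, loosely mirroring the global and local consistency goals the abstract mentions.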