Paper Title
MangaGAN: Unpaired Photo-to-Manga Translation Based on The Methodology of Manga Drawing
Paper Authors
Paper Abstract
Manga is a world-popular comic form that originated in Japan; it typically employs black-and-white stroke lines and geometric exaggeration to depict human appearances, poses, and actions. In this paper, we propose MangaGAN, the first Generative Adversarial Network (GAN)-based method for unpaired photo-to-manga translation. Inspired by how experienced manga artists draw manga, MangaGAN generates the geometric features of a manga face with a purpose-designed GAN model and delicately translates each facial region into the manga domain with a tailored multi-GAN architecture. To train MangaGAN, we construct a new dataset collected from a popular manga work, containing manga facial features, landmarks, bodies, and more. Moreover, to produce high-quality manga faces, we propose a structural smoothing loss that smooths stroke lines and suppresses noisy pixels, and a similarity-preserving module that improves the similarity between the photo and manga domains. Extensive experiments show that MangaGAN produces high-quality manga faces that preserve both facial similarity and a popular manga style, and that it outperforms related state-of-the-art methods.
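The abstract names a structural smoothing loss but does not define it. Below is a minimal, hypothetical PyTorch sketch of what such a loss could look like, assuming two illustrative terms: a binarization term that pulls each grayscale pixel toward pure black or white (to avoid noisy gray pixels) and a total-variation term that penalizes abrupt neighbor differences (to keep stroke lines smooth). The function name `structural_smoothing_loss` and the weight `tv_weight` are assumptions for illustration, not the paper's actual formulation.

```python
import torch


def structural_smoothing_loss(y: torch.Tensor, tv_weight: float = 0.5) -> torch.Tensor:
    """Illustrative sketch (not the paper's exact loss).

    Assumes `y` is a batch of generated grayscale manga images with
    values in [0, 1], shaped (N, 1, H, W).
    """
    # Term 1: y * (1 - y) peaks at mid-gray (0.5) and vanishes at 0 or 1,
    # so minimizing it pushes pixels toward pure black or white.
    binarization = (y * (1.0 - y)).mean()

    # Term 2: anisotropic total variation over vertical and horizontal
    # neighbors, discouraging jagged or noisy stroke edges.
    tv_h = (y[..., 1:, :] - y[..., :-1, :]).abs().mean()
    tv_w = (y[..., :, 1:] - y[..., :, :-1]).abs().mean()

    return binarization + tv_weight * (tv_h + tv_w)


# Usage: add this term to a generator's adversarial loss during training.
fake_manga = torch.rand(4, 1, 256, 256)  # stand-in for generator output
loss = structural_smoothing_loss(fake_manga)
print(loss.item())
```

In practice such a term would be weighted against the adversarial and similarity-preserving objectives; the actual balance used by MangaGAN is specified in the paper itself, not here.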