Paper Title

Learning to Transfer Texture from Clothing Images to 3D Humans

Paper Authors

Aymen Mir, Thiemo Alldieck, Gerard Pons-Moll

Paper Abstract

In this paper, we present a simple yet effective method to automatically transfer textures of clothing images (front and back) to 3D garments worn on top of SMPL, in real time. We first automatically compute training pairs of images with aligned 3D garments using a custom non-rigid 3D-to-2D registration method, which is accurate but slow. Using these pairs, we learn a mapping from pixels to the 3D garment surface. Our idea is to learn dense correspondences from garment image silhouettes to a 2D-UV map of a 3D garment surface using shape information alone, completely ignoring texture, which allows us to generalize to a wide range of web images. Several experiments demonstrate that our model is more accurate than widely used baselines such as thin-plate-spline warping and image-to-image translation networks, while being orders of magnitude faster. Our model opens the door for applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
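The core mechanism described in the abstract, predicting dense image-to-surface correspondences from the silhouette alone, can be illustrated with a short sketch. This is not the authors' released code; the network name, architecture, and resolutions below are illustrative assumptions. The idea: an encoder-decoder consumes the binary garment silhouette and regresses, for every texel of the garment's UV texture map, the image coordinates to sample from, so the texture map is filled by a single differentiable lookup.

```python
# Minimal sketch (assumed architecture, not the authors' code): a network that
# maps a garment silhouette to per-texel sampling coordinates, which are then
# used to warp the clothing photo into the garment's UV texture map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SilhouetteToUV(nn.Module):
    """Hypothetical encoder-decoder: silhouette -> sampling grid in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # H -> H/2
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # H/2 -> H/4
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, silhouette):
        # silhouette: (B, 1, H, W) binary mask; texture is never seen here,
        # which is what lets the learned correspondences generalize across
        # arbitrarily textured web images.
        return self.net(silhouette)  # (B, 2, H, W): (x, y) coords per UV texel

def transfer_texture(model, image, silhouette):
    """Fill the UV texture map by sampling the photo at predicted coords."""
    coords = model(silhouette)              # (B, 2, H, W) in [-1, 1]
    grid = coords.permute(0, 2, 3, 1)       # (B, H, W, 2), grid_sample layout
    return F.grid_sample(image, grid, align_corners=True)  # texture map
```

Because the network conditions only on shape, the training pairs produced by the slow non-rigid registration step supervise correspondences that transfer unchanged to unseen textures; at test time, texture transfer reduces to one forward pass plus a sampling operation, consistent with the abstract's real-time claim.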
