Paper Title


Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images

Authors

Heming Zhu, Yu Cao, Hang Jin, Weikai Chen, Dong Du, Zhangye Wang, Shuguang Cui, Xiaoguang Han

Abstract


High-fidelity clothing reconstruction is the key to achieving photorealism in a wide range of applications including human digitization, virtual try-on, etc. Recent advances in learning-based approaches have accomplished unprecedented accuracy in recovering unclothed human shape and pose from single images, thanks to the availability of powerful statistical models, e.g. SMPL, learned from a large number of body scans. In contrast, modeling and recovering clothed human and 3D garments remains notoriously difficult, mostly due to the lack of large-scale clothing models available for the research community. We propose to fill this gap by introducing Deep Fashion3D, the largest collection to date of 3D garment models, with the goal of establishing a novel benchmark and dataset for the evaluation of image-based garment reconstruction systems. Deep Fashion3D contains 2078 models reconstructed from real garments, which covers 10 different categories and 563 garment instances. It provides rich annotations including 3D feature lines, 3D body pose and the corresponded multi-view real images. In addition, each garment is randomly posed to enhance the variety of real clothing deformations. To demonstrate the advantage of Deep Fashion3D, we propose a novel baseline approach for single-view garment reconstruction, which leverages the merits of both mesh and implicit representations. A novel adaptable template is proposed to enable the learning of all types of clothing in a single network. Extensive experiments have been conducted on the proposed dataset to verify its significance and usefulness. We will make Deep Fashion3D publicly available upon publication.
