Paper Title
FIND: An Unsupervised Implicit 3D Model of Articulated Human Feet
Paper Authors
Paper Abstract
In this paper we present a high-fidelity, articulated 3D human foot model. The model is parameterised by a disentangled latent code in terms of shape, texture and articulated pose. While high-fidelity models are typically created with strong supervision such as 3D keypoint correspondences or pre-registration, we focus on the difficult case of little to no annotation. To this end, we make the following contributions: (i) we develop a Foot Implicit Neural Deformation field model, named FIND, capable of tailoring explicit meshes at any resolution, i.e., for low- or high-powered devices; (ii) an approach for training our model in various modes of weak supervision, with progressively better disentanglement as more labels, such as pose categories, are provided; (iii) a novel unsupervised part-based loss for fitting our model to 2D images, which performs better than traditional photometric or silhouette losses; (iv) finally, we release a new dataset of high-resolution 3D human foot scans, Foot3D. On this dataset, we show that our model outperforms a strong PCA implementation trained on the same data in terms of shape quality and part correspondences, and that our novel unsupervised part-based loss improves inference on images.
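The core idea in contribution (i), a per-point implicit deformation field conditioned on disentangled latent codes, can be pictured with a toy sketch. The snippet below is an illustration only, not the paper's implementation: the MLP weights are random stand-ins for a trained network, and the latent dimensions and network size are assumed for demonstration. It shows the key property claimed in the abstract: since the field is queried pointwise, a template surface can be sampled at any resolution before deformation.

```python
import numpy as np

# Hypothetical sketch of a FIND-style implicit deformation field:
# an MLP maps a template surface point, concatenated with disentangled
# shape and pose latent codes, to a 3D displacement of that point.
rng = np.random.default_rng(0)

DIM_SHAPE, DIM_POSE, HIDDEN = 8, 4, 32  # assumed sizes, for illustration

# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(scale=0.1, size=(3 + DIM_SHAPE + DIM_POSE, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, 3))

def deformation_field(points, z_shape, z_pose):
    """Deform (N, 3) template points by an MLP-predicted displacement."""
    n = points.shape[0]
    codes = np.concatenate(
        [np.tile(z_shape, (n, 1)), np.tile(z_pose, (n, 1))], axis=1
    )
    h = np.tanh(np.concatenate([points, codes], axis=1) @ W1)
    return points + h @ W2  # template point + predicted displacement

# The same field evaluated on a coarse and a dense template sampling:
# the model "tailors explicit meshes at any resolution".
z_shape = rng.normal(size=DIM_SHAPE)
z_pose = rng.normal(size=DIM_POSE)
coarse = deformation_field(rng.normal(size=(100, 3)), z_shape, z_pose)
dense = deformation_field(rng.normal(size=(10_000, 3)), z_shape, z_pose)
print(coarse.shape, dense.shape)  # → (100, 3) (10000, 3)
```

Because shape and pose enter as separate codes, varying one code while holding the other fixed changes only the corresponding factor, which is the disentanglement that contribution (ii) progressively improves as more labels are supplied.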