Paper Title
Unsupervised 3D Pose Transfer with Cross Consistency and Dual Reconstruction
Paper Authors
Paper Abstract
The goal of 3D pose transfer is to transfer the pose from a source mesh to a target mesh while preserving the identity information (e.g., face, body shape) of the target mesh. Deep learning-based methods have improved the efficiency and performance of 3D pose transfer. However, most of them are trained under ground-truth supervision, whose availability is limited in real-world scenarios. In this work, we present X-DualNet, a simple yet effective approach that enables unsupervised 3D pose transfer. In X-DualNet, we introduce a generator $G$ that contains correspondence learning and pose transfer modules to achieve 3D pose transfer. We learn the shape correspondence by solving an optimal transport problem without any keypoint annotations, and generate high-quality meshes with our elastic instance normalization (ElaIN) in the pose transfer module. With $G$ as the basic component, we propose a cross consistency learning scheme and a dual reconstruction objective to learn the pose transfer without supervision. In addition, we adopt an as-rigid-as-possible deformer in the training process to fine-tune the body shape of the generated results. Extensive experiments on human and animal data demonstrate that our framework achieves performance comparable to state-of-the-art supervised approaches.
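To make the two unsupervised objectives more concrete, below is a minimal PyTorch-style sketch. It assumes $G(\text{identity}, \text{pose})$ returns a mesh carrying the identity of its first argument re-posed to match its second, and it adopts one plausible reading of the abstract: dual reconstruction means each mesh is reproduced when used as both identity and pose source, and cross consistency means a pose-transfer round trip recovers the original mesh. The function names, loss form (L1), and equal weighting are illustrative assumptions, not the paper's verified implementation.

```python
import torch
import torch.nn.functional as F

def unsupervised_losses(G, mesh_a, mesh_b):
    """Illustrative unsupervised objectives for a pose-transfer generator.

    Assumes G(identity_mesh, pose_mesh) -> mesh with the identity of the
    first argument re-posed to match the second. mesh_a and mesh_b are
    (batch, num_vertices, 3) vertex tensors of two unpaired meshes.
    """
    # Dual reconstruction (assumed reading): feeding a mesh as both the
    # identity and the pose source should reproduce the mesh itself,
    # for each of the two inputs.
    loss_dual = F.l1_loss(G(mesh_a, mesh_a), mesh_a) + \
                F.l1_loss(G(mesh_b, mesh_b), mesh_b)

    # Cross consistency (assumed reading): transfer mesh_b's pose onto
    # mesh_a, then transfer mesh_a's original pose back; the round trip
    # should recover mesh_a. No ground-truth transferred mesh is needed.
    a_in_b_pose = G(mesh_a, mesh_b)
    a_round_trip = G(a_in_b_pose, mesh_a)
    loss_cross = F.l1_loss(a_round_trip, mesh_a)

    # Equal weighting here is an arbitrary placeholder.
    return loss_dual + loss_cross
```

Note that both losses use an input mesh itself as the reconstruction target, which is why training only requires unpaired meshes sampled from the dataset rather than ground-truth transferred results.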