Paper Title
TM-NET: Deep Generative Networks for Textured Meshes
Paper Authors
Paper Abstract
We introduce TM-NET, a novel deep generative model for synthesizing textured meshes in a part-aware manner. Once trained, the network can generate novel textured meshes from scratch or predict textures for a given 3D mesh, without image guidance. Plausible and diverse textures can be generated for the same mesh part, while texture compatibility between parts in the same shape is achieved via conditional generation. Specifically, our method produces texture maps for individual shape parts, each as a deformable box, leading to a natural UV map with minimal distortion. The network separately embeds part geometry (via a PartVAE) and part texture (via a TextureVAE) into their respective latent spaces, so as to facilitate learning texture probability distributions conditioned on geometry. We introduce a conditional autoregressive model for texture generation, which can be conditioned on both part geometry and textures already generated for other parts to achieve texture compatibility. To produce high-frequency texture details, our TextureVAE operates in a high-dimensional latent space via dictionary-based vector quantization. We also exploit transparencies in the texture as an effective means to model complex shape structures including topological details. Extensive experiments demonstrate the plausibility, quality, and diversity of the textures and geometries generated by our network, while avoiding inconsistency issues that are common to novel view synthesis methods.
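The abstract mentions that the TextureVAE operates in a high-dimensional latent space via dictionary-based vector quantization. As a minimal illustration of that general idea (a generic VQ-VAE-style nearest-codebook lookup, not the authors' actual implementation; the function name and shapes are assumptions for this sketch):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each latent vector to its nearest dictionary entry.

    z: (N, D) array of encoder outputs.
    codebook: (K, D) learned dictionary of code vectors.
    Returns the quantized latents and the chosen code indices.
    Illustrative only -- not TM-NET's exact TextureVAE quantizer.
    """
    # Squared Euclidean distance from every latent to every code.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)      # index of nearest code per latent
    return codebook[idx], idx

# Example: two latents snap to the closer of two codebook entries.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
quantized, idx = vector_quantize(z, codebook)
```

Replacing continuous latents with discrete dictionary indices is what lets an autoregressive model (as described above, conditioned on part geometry and previously generated parts) predict textures token by token.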