Paper Title
Learning Deformable Tetrahedral Meshes for 3D Reconstruction
Paper Authors
Paper Abstract
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics. Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations. We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem. Unlike existing volumetric approaches, DefTet optimizes for both vertex placement and occupancy, and is differentiable with respect to standard 3D reconstruction loss functions. It is thus simultaneously high-precision, volumetric, and amenable to learning-based neural architectures. We show that it can represent arbitrary, complex topology, is both memory and computationally efficient, and can produce high-fidelity reconstructions with a significantly smaller grid size than alternative volumetric approaches. The predicted surfaces are also inherently defined as tetrahedral meshes, and thus do not require post-processing. We demonstrate that DefTet matches or exceeds both the quality of the previous best approaches and the performance of the fastest ones. Our approach obtains high-quality tetrahedral meshes computed directly from noisy point clouds, and is the first to showcase high-quality 3D tet-mesh results using only a single image as input. Our project webpage: https://nv-tlabs.github.io/DefTet/
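As a rough illustration of the parameterization the abstract describes (a fixed tetrahedral grid with predicted per-vertex offsets and per-tetrahedron occupancy, whose boundary faces directly give the output surface), the sketch below shows how a surface could be read off such a representation. The function name `deftet_surface`, the 0.5 occupancy threshold, and the NumPy implementation are assumptions for illustration only, not the authors' code.

```python
# Minimal sketch (not the authors' implementation) of reading a surface from a
# DefTet-style representation: vertices of a tetrahedral grid carry predicted
# offsets, tetrahedra carry predicted occupancies, and the output surface is the
# set of triangular faces bounding the occupied region.
import numpy as np

def deftet_surface(vertices, tets, offsets, occupancy, threshold=0.5):
    """Extract boundary triangles of the occupied region of a deformed tet grid.

    vertices:  (N, 3) rest positions of the tetrahedral grid vertices
    tets:      (T, 4) vertex indices of each tetrahedron
    offsets:   (N, 3) predicted per-vertex displacements
    occupancy: (T,)   predicted per-tetrahedron occupancy in [0, 1]
    """
    deformed = vertices + offsets        # deformed vertex positions
    occupied = occupancy > threshold     # binarized per-tet occupancy (assumed threshold)

    # Each occupied tet contributes 4 triangular faces. A face that appears in
    # exactly one occupied tet lies on the surface (its neighbor across that face
    # is either unoccupied or outside the grid); faces counted twice are interior.
    face_ids = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    face_count = {}
    for tet, occ in zip(tets, occupied):
        if not occ:
            continue
        for i, j, k in face_ids:
            key = tuple(sorted((int(tet[i]), int(tet[j]), int(tet[k]))))
            face_count[key] = face_count.get(key, 0) + 1

    surface_faces = np.array(
        [face for face, count in face_count.items() if count == 1], dtype=np.int64
    )
    return deformed, surface_faces

# Example: a single occupied tetrahedron exposes all 4 of its faces as the surface.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float32)
tets = np.array([[0, 1, 2, 3]])
deformed, faces = deftet_surface(verts, tets, np.zeros_like(verts), np.array([0.9]))
print(faces.shape)  # (4, 3)
```

Because the surface is defined directly by the occupied tetrahedra and the deformed vertex positions, no post-processing step (such as iso-surface extraction) is needed, which is the property the abstract highlights.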