Paper Title
3D Equivariant Graph Implicit Functions
Paper Authors
Paper Abstract
In recent years, neural implicit representations have made remarkable progress in modeling 3D shapes with arbitrary topology. In this work, we address two key limitations of such representations: they fail to capture fine local 3D geometric detail, and they struggle to learn from, and generalize to, shapes under unseen 3D transformations. To this end, we introduce a novel family of graph implicit functions with equivariant layers that facilitate modeling of fine local detail and guarantee robustness to various groups of geometric transformations, through local $k$-NN graph embeddings of sparse point-set observations at multiple resolutions. Our method improves over the existing rotation-equivariant implicit function, raising IoU from 0.69 to 0.89 on the ShapeNet reconstruction task. We also show that our equivariant implicit functions extend to other types of similarity transformations and generalize to unseen translations and scalings.
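As a minimal illustration of the rotation-equivariance property claimed in the abstract (this is a toy sketch, not the paper's actual layer), the snippet below builds a local k-NN graph on a point cloud and computes a simple vector feature per point: the mean offset to its neighbors. Because the k-NN graph depends only on pairwise distances (which rotations preserve) and offsets rotate with the input, the feature satisfies f(xR^T) = f(x)R^T. All names here are illustrative.

```python
import numpy as np

def knn_graph(points, k):
    """Return the indices of the k nearest neighbors of each point (self excluded)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

def equivariant_layer(points, k=4):
    """Toy rotation-equivariant feature: mean offset vector to k-NN neighbors."""
    idx = knn_graph(points, k)
    return (points[idx] - points[:, None, :]).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 3))  # random point cloud

# Sample a random rotation matrix via QR decomposition.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1  # ensure a proper rotation (det = +1)

# Equivariance check: rotating the input rotates the output.
out_of_rotated = equivariant_layer(x @ q.T)
rotated_output = equivariant_layer(x) @ q.T
print(np.allclose(out_of_rotated, rotated_output, atol=1e-8))  # True
```

Note the contrast with a grid- or voxel-based feature extractor, whose output generally changes in a non-structured way under rotation; the graph construction is what makes the invariance/equivariance hold exactly rather than approximately.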