Paper Title
On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes
Paper Authors
Paper Abstract
A neural implicit outputs a number indicating whether the given query point in space is inside, outside, or on a surface. Many prior works have focused on _latent-encoded_ neural implicits, where a latent vector encoding of a specific shape is also fed as input. While affording latent-space interpolation, this comes at the cost of reconstruction accuracy for any _single_ shape. Training a specific network for each 3D shape, a _weight-encoded_ neural implicit may forgo the latent vector and focus reconstruction accuracy on the details of a single shape. While previously considered as an intermediary representation for 3D scanning tasks or as a toy-problem leading up to latent-encoding tasks, weight-encoded neural implicits have not yet been taken seriously as a 3D shape representation. In this paper, we establish that weight-encoded neural implicits meet the criteria of a first-class 3D shape representation. We introduce a suite of technical contributions to improve reconstruction accuracy, convergence, and robustness when learning the signed distance field induced by a polygonal mesh -- the _de facto_ standard representation. Viewed as a lossy compression, our conversion outperforms standard techniques from geometry processing. Compared to previous latent- and weight-encoded neural implicits we demonstrate superior robustness, scalability, and performance.
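To make the idea concrete, the sketch below shows what a weight-encoded neural implicit looks like in code: a small MLP mapping a 3D query point to a signed distance is overfit to a single shape, so the shape is stored entirely in the network weights and no latent vector is fed as input. This is only a minimal illustration under assumed choices; the class name `WeightEncodedImplicit`, the helper `sdf_fn`, the hidden width/depth, the uniform point sampling, and the L1 loss are illustrative assumptions, not the paper's exact architecture, sampling strategy, or training objective.

```python
# Minimal sketch (not the authors' exact method) of a weight-encoded neural implicit:
# a small MLP f_theta: R^3 -> R is overfit to the signed distance field (SDF) of one
# shape, so the shape lives entirely in the weights; no latent code is given as input.
import torch
import torch.nn as nn

class WeightEncodedImplicit(nn.Module):
    """MLP mapping a 3D query point to an (approximate) signed distance."""
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        layers, d_in = [], 3
        for _ in range(depth):
            layers += [nn.Linear(d_in, hidden), nn.ReLU()]
            d_in = hidden
        layers += [nn.Linear(d_in, 1)]  # scalar signed distance output
        self.net = nn.Sequential(*layers)

    def forward(self, x):                # x: (N, 3) query points
        return self.net(x).squeeze(-1)   # (N,) signed distances

def train_single_shape(sdf_fn, steps=1000, batch=4096, device="cpu"):
    """Overfit one network to one shape.

    `sdf_fn` is a hypothetical callable returning ground-truth signed distances
    for query points (e.g. computed from a polygonal mesh); uniform sampling in
    a bounding box is only one of many possible sampling schemes.
    """
    model = WeightEncodedImplicit().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(steps):
        pts = torch.rand(batch, 3, device=device) * 2.0 - 1.0  # points in [-1, 1]^3
        target = sdf_fn(pts)
        loss = torch.nn.functional.l1_loss(model(pts), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # Toy ground truth: SDF of a radius-0.5 sphere centered at the origin.
    sphere_sdf = lambda p: p.norm(dim=-1) - 0.5
    model = train_single_shape(sphere_sdf, steps=200)
    q = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    print(model(q))  # negative ~inside, positive ~outside, near zero on the surface
```

The sign of the network's output answers the inside/outside/on-surface query described in the first sentence of the abstract; the paper's technical contributions target exactly the weak points of such a naive setup, namely reconstruction accuracy, convergence, and robustness when the target SDF is induced by a polygonal mesh.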