Paper Title
Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations
Paper Authors
Paper Abstract
We present an adaptive deep representation of volumetric fields of 3D shapes and an efficient approach to learn this deep representation for high-quality 3D shape reconstruction and auto-encoding. Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree and applies a compact multilayer perceptron network for mapping the features to the field value at each 3D position. An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes. The core of our network is a new graph convolution operator defined over a regular grid of features fused from irregular neighboring octree nodes at different levels, which not only reduces the computational and memory cost of the convolutions over irregular neighboring octree nodes, but also improves the performance of feature learning. Our method effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality for modeling 3D shapes out of training categories. We evaluate our method on a set of reconstruction tasks of 3D shapes and scenes and validate its superiority over other existing approaches. Our code, data, and trained models are available at https://wang-ps.github.io/dualocnn.
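The abstract describes a compact multilayer perceptron that maps features, gathered from the adaptive octree feature volume, to a field value at each 3D query position. A minimal sketch of that decoding step is below; the feature dimension, hidden width, and the stand-in random feature are all assumptions for illustration, not the paper's actual architecture (the real model interpolates features from octree nodes and uses trained weights).

```python
import numpy as np

def mlp_field(feature, W1, b1, W2, b2):
    """Map a per-point feature vector to a scalar field value
    (e.g. occupancy or signed distance), as in an implicit decoder."""
    h = np.maximum(W1 @ feature + b1, 0.0)  # single ReLU hidden layer
    return (W2 @ h + b2).item()             # scalar field value

# Hypothetical sizes and weights standing in for trained parameters.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((64, 32)) * 0.1, np.zeros(64)
W2, b2 = rng.standard_normal((1, 64)) * 0.1, np.zeros(1)

# In the actual method, a query point's feature comes from the
# octree-organized feature volume; here we use a random stand-in.
feat = rng.standard_normal(32)
value = mlp_field(feat, W1, b1, W2, b2)
print(value)
```

Evaluating this decoder on a dense grid of query points and extracting a level set would yield the reconstructed surface; the paper's contribution is in how the feature volume feeding this decoder is learned via graph convolutions on the dual graph of octree nodes.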