Paper Title

PSSNet: Planarity-sensible Semantic Segmentation of Large-scale Urban Meshes

Authors

Weixiao Gao, Liangliang Nan, Bas Boom, Hugo Ledoux

Abstract

We introduce a novel deep learning-based framework to interpret 3D urban scenes represented as textured meshes. Based on the observation that object boundaries typically align with the boundaries of planar regions, our framework achieves semantic segmentation in two steps: planarity-sensible over-segmentation followed by semantic classification. The over-segmentation step generates an initial set of mesh segments that capture the planar and non-planar regions of urban scenes. In the subsequent classification step, we construct a graph that encodes the geometric and photometric features of the segments in its nodes and the multi-scale contextual features in its edges. The final semantic segmentation is obtained by classifying the segments using a graph convolutional network. Experiments and comparisons on two semantic urban mesh benchmarks demonstrate that our approach outperforms the state-of-the-art methods in terms of boundary quality, mean IoU (intersection over union), and generalization ability. We also introduce several new metrics for evaluating mesh over-segmentation methods dedicated to semantic segmentation, and our proposed over-segmentation approach outperforms state-of-the-art methods on all metrics. Our source code is available at \url{https://github.com/WeixiaoGao/PSSNet}.
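The classification step described above builds a graph whose nodes hold segment features and whose edges hold contextual features, then classifies segments with a graph convolutional network. The toy sketch below illustrates that idea only; the segment count, feature dimensions, edge-conditioned message formulation, and random weights are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy segment graph: 5 over-segmentation segments (nodes), each carrying a
# geometric + photometric feature vector; edges carry contextual features.
# All sizes and weights are illustrative, not taken from the paper.
num_segments, node_dim, edge_dim, num_classes = 5, 8, 4, 3
X = rng.normal(size=(num_segments, node_dim))      # per-segment node features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # segment adjacency
E = rng.normal(size=(len(edges), edge_dim))        # per-edge context features

# Random layer weights for one edge-conditioned message-passing layer.
W_node = rng.normal(size=(node_dim, node_dim))
W_edge = rng.normal(size=(edge_dim, node_dim))
W_out = rng.normal(size=(node_dim, num_classes))

def gcn_layer(X, edges, E):
    """One message-passing pass: each node sums messages from its
    neighbors, with each message modulated by the edge's features."""
    H = X @ W_node
    for k, (i, j) in enumerate(edges):
        edge_term = E[k] @ W_edge
        H[i] = H[i] + X[j] @ W_node + edge_term   # message j -> i
        H[j] = H[j] + X[i] @ W_node + edge_term   # message i -> j
    return np.maximum(H, 0.0)                     # ReLU

H = gcn_layer(X, edges, E)
logits = H @ W_out
labels = logits.argmax(axis=1)  # one semantic class per segment
print(labels.shape)
```

In an actual pipeline these per-segment labels would then be mapped back onto the mesh faces of each segment, which is what makes the planarity-sensible over-segmentation matter for boundary quality.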
