Paper Title

Airborne LiDAR Point Cloud Classification with Graph Attention Convolution Neural Network

Authors

Congcong Wen, Xiang Li, Xiaojing Yao, Ling Peng, Tianhe Chi

Abstract

Airborne light detection and ranging (LiDAR) plays an increasingly significant role in urban planning, topographic mapping, environmental monitoring, power line detection and other fields, thanks to its capability to quickly acquire large-scale and high-precision ground information. To achieve point cloud classification, previous studies proposed point cloud deep learning models that can directly process raw point clouds based on PointNet-like architectures, and some recent works proposed graph convolutional neural networks based on the inherent topology of point clouds. However, the above point cloud deep learning models focus only on exploring local geometric structures and ignore the global contextual relationships among all points. In this paper, we present a graph attention convolution neural network (GACNN) that can be directly applied to the classification of unstructured 3D point clouds obtained by airborne LiDAR. Specifically, we first introduce a graph attention convolution module that incorporates global contextual information and local structural features. Based on the proposed graph attention convolution module, we further design an end-to-end encoder-decoder network, named GACNN, to capture multiscale features of the point clouds and therefore enable more accurate airborne point cloud classification. Experiments on the ISPRS 3D labeling dataset show that the proposed model achieves new state-of-the-art performance in terms of average F1 score (71.5%) and a satisfactory overall accuracy (83.2%). Additionally, experiments conducted on the 2019 Data Fusion Contest dataset, comparing against other prevalent point cloud deep learning models, demonstrate the favorable generalization capability of the proposed model.
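
To make the graph attention convolution idea in the abstract more concrete, below is a minimal PyTorch sketch of a local k-NN graph attention layer over a point cloud. This is an illustrative assumption, not the authors' released implementation: the names (`GraphAttentionConv`, `knn_indices`), the choice of edge features (relative position plus feature difference), and the hyperparameters are hypothetical, and GACNN's global attention branch and encoder-decoder structure are omitted.

```python
# Illustrative sketch only (not the authors' code): a k-NN graph attention
# convolution that aggregates attention-weighted neighbor features.
# Shapes: points (B, N, 3), features (B, N, C).
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_indices(points, k):
    """Indices of the k nearest neighbors of every point, shape (B, N, k)."""
    dists = torch.cdist(points, points)                  # (B, N, N) pairwise distances
    return dists.topk(k, dim=-1, largest=False).indices


class GraphAttentionConv(nn.Module):
    """Hypothetical local graph attention convolution: attention weights are
    computed from relative positions and feature differences of neighbors."""

    def __init__(self, in_channels, out_channels, k=16):
        super().__init__()
        self.k = k
        self.feat_mlp = nn.Linear(in_channels + 3, out_channels)  # encodes local structure
        self.attn_mlp = nn.Linear(in_channels + 3, out_channels)  # produces attention scores

    def forward(self, points, features):
        B, N, _ = features.shape
        idx = knn_indices(points, self.k)                          # (B, N, k)
        batch = torch.arange(B, device=points.device).view(B, 1, 1)
        nbr_pts = points[batch, idx]                               # (B, N, k, 3)
        nbr_feat = features[batch, idx]                            # (B, N, k, C)

        # Edge features: relative position and feature difference per neighbor.
        rel_pos = nbr_pts - points.unsqueeze(2)                    # (B, N, k, 3)
        rel_feat = nbr_feat - features.unsqueeze(2)                # (B, N, k, C)
        edge = torch.cat([rel_pos, rel_feat], dim=-1)              # (B, N, k, C+3)

        values = self.feat_mlp(edge)                               # (B, N, k, out)
        attn = F.softmax(self.attn_mlp(edge), dim=2)               # normalize over neighbors
        return (attn * values).sum(dim=2)                          # (B, N, out)


if __name__ == "__main__":
    pts = torch.rand(2, 1024, 3)       # toy airborne LiDAR block (coordinates)
    feats = torch.rand(2, 1024, 8)     # e.g. intensity / return-count features
    layer = GraphAttentionConv(in_channels=8, out_channels=64, k=16)
    print(layer(pts, feats).shape)     # torch.Size([2, 1024, 64])
```

In the paper's encoder-decoder design, layers like this would be stacked with downsampling and upsampling to capture multiscale features; the sketch above shows only the per-layer neighborhood attention step.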
