Paper Title

Two-person Graph Convolutional Network for Skeleton-based Human Interaction Recognition

Authors

Zhengcen Li, Yueran Li, Linlin Tang, Tong Zhang, Jingyong Su

Abstract

Graph convolutional networks (GCNs) have been the predominant method in skeleton-based human action recognition, including human-human interaction recognition. However, when dealing with interaction sequences, current GCN-based methods simply split the two-person skeleton into two discrete graphs and perform graph convolution separately, as done for single-person action classification. Such operations ignore rich interactive information and hinder effective spatial inter-body relationship modeling. To overcome this shortcoming, we introduce a novel unified two-person graph to represent inter-body and intra-body correlations between joints. Experiments show accuracy improvements in recognizing both interactions and individual actions when utilizing the proposed two-person graph topology. In addition, we design several graph labeling strategies to supervise the model to learn discriminative spatial-temporal interactive features. Finally, we propose a two-person graph convolutional network (2P-GCN). Our model achieves state-of-the-art results on four benchmarks across three interaction datasets: SBU and the interaction subsets of NTU-RGB+D and NTU-RGB+D 120.
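The core idea of the unified two-person graph can be sketched as follows: instead of two separate per-person adjacency matrices, the joints of both people are placed in one block matrix, and inter-body edges are added on top of the block-diagonal intra-body connections. This is a minimal illustration using a toy 3-joint skeleton and a hypothetical "connect corresponding joints" pairing; the paper's actual topology (25-joint NTU skeletons) and its graph labeling strategies are more elaborate.

```python
import numpy as np

# Toy single-person skeleton: 3 joints connected in a chain (0-1, 1-2).
# The real NTU-RGB+D skeleton has 25 joints; this is illustrative only.
num_joints = 3
intra = np.zeros((num_joints, num_joints), dtype=int)
for i, j in [(0, 1), (1, 2)]:
    intra[i, j] = intra[j, i] = 1

# Unified two-person graph: one (2N x 2N) adjacency whose diagonal
# blocks hold each person's intra-body edges.
n = 2 * num_joints
two_person = np.zeros((n, n), dtype=int)
two_person[:num_joints, :num_joints] = intra
two_person[num_joints:, num_joints:] = intra

# Hypothetical inter-body edges linking corresponding joints of the
# two people (the paper's actual inter-body connections may differ).
for j in range(num_joints):
    two_person[j, num_joints + j] = 1
    two_person[num_joints + j, j] = 1

print(two_person)
```

A single graph convolution over `two_person` can then propagate features between the two bodies directly, which is exactly what separate per-person graphs cannot do.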
