Paper Title
Fine-grained Human Activity Recognition Using Virtual On-body Acceleration Data
Paper Authors
Paper Abstract
Previous work has demonstrated that virtual accelerometry data, extracted from videos using cross-modality transfer approaches like IMUTube, is beneficial for training complex and effective human activity recognition (HAR) models. Systems like IMUTube were originally designed to cover activities that are based on substantial body (part) movements. Yet, life is complex, and a range of activities of daily living is based on only rather subtle movements, which raises the question of to what extent systems like IMUTube are also of value for fine-grained HAR, i.e., when does IMUTube break? In this work we first introduce a measure to quantitatively assess the subtlety of the human movements that underlie activities of interest, the motion subtlety index (MSI), which captures local pixel movements and pose changes in the vicinity of target virtual sensor locations, and we correlate it with the eventual activity recognition accuracy. We then perform a "stress test" on IMUTube and explore for which activities with underlying subtle movements a cross-modality transfer approach works, and for which it does not. As such, the work presented in this paper allows us to map out the landscape for IMUTube applications in practical scenarios.
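The abstract describes the MSI as capturing local pixel movement near a target virtual-sensor location. The paper's exact definition is not given here, so the following is only an illustrative proxy, assuming grayscale video frames as a NumPy array: it scores motion subtlety as the mean absolute inter-frame difference inside a square window around a hypothetical sensor location (the pose-change component of the MSI is omitted).

```python
import numpy as np


def motion_subtlety_proxy(frames, center, radius):
    """Illustrative proxy for a motion subtlety measure (not the paper's MSI).

    frames: array of shape (T, H, W), grayscale video frames.
    center: (row, col) of the target virtual sensor location.
    radius: half-size of the square analysis window.
    Returns the mean absolute difference between consecutive frames
    inside the window; small values indicate subtle local motion.
    """
    r, c = center
    _, H, W = frames.shape
    r0, r1 = max(0, r - radius), min(H, r + radius + 1)
    c0, c1 = max(0, c - radius), min(W, c + radius + 1)
    patch = frames[:, r0:r1, c0:c1].astype(float)
    # Temporal differencing: |frame_{t+1} - frame_t| averaged over the window.
    return float(np.abs(np.diff(patch, axis=0)).mean())


# Hypothetical usage: a moving patch scores higher than a static scene.
still = np.zeros((5, 32, 32))
moving = np.zeros((5, 32, 32))
for t in range(5):
    moving[t, 10 + t:14 + t, 10:14] = 1.0  # block shifting one pixel per frame
print(motion_subtlety_proxy(moving, (12, 12), 8) >
      motion_subtlety_proxy(still, (12, 12), 8))
```

Correlating such a score with per-activity recognition accuracy, as the abstract describes, would then indicate at which level of motion subtlety a cross-modality transfer pipeline starts to break down.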