Paper Title
TransDARC: Transformer-based Driver Activity Recognition with Latent Space Feature Calibration
Paper Authors
Paper Abstract
Traditional video-based human activity recognition has experienced remarkable progress linked to the rise of deep learning, but this progress has been slower for the downstream task of driver behavior understanding. Understanding the situation inside the vehicle cabin is essential for Advanced Driver Assistance Systems (ADAS), as it enables identifying distraction, predicting the driver's intent, and leads to more convenient human-vehicle interaction. At the same time, driver observation systems face substantial obstacles, as they need to capture different granularities of driver states, while the complexity of such secondary activities grows with rising automation and increased driver freedom. Furthermore, a model is rarely deployed under conditions identical to those of its training set, as sensor placements and types vary from vehicle to vehicle, which constitutes a substantial obstacle for real-life deployment of data-driven models. In this work, we present a novel vision-based framework for recognizing secondary driver behaviours, based on visual transformers and an additional augmented feature distribution calibration module. This module operates in the latent feature space, enriching and diversifying the training set at the feature level in order to improve generalization to novel data appearances (e.g., sensor changes) as well as general feature quality. Our framework consistently leads to better recognition rates, surpassing previous state-of-the-art results on the public Drive&Act benchmark at all granularity levels. Our code is publicly available at https://github.com/KPeng9510/TransDARC.
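The abstract describes a module that enriches and diversifies the training set in latent feature space. A common recipe for this kind of feature distribution calibration is to estimate per-class Gaussian statistics from base features, calibrate the statistics for a new feature using its nearest base classes, and sample synthetic features from the calibrated distribution. The following is only a minimal illustrative sketch of that general idea, not the paper's actual implementation; the function names, the Tukey power transform, and all parameter values (`lam`, `k`, `alpha`) are assumptions.

```python
import numpy as np

def tukey_transform(x, lam=0.5):
    # Tukey's ladder-of-powers transform, often used to make feature
    # distributions more Gaussian before calibration (assumes x > 0).
    return np.power(x, lam) if lam != 0 else np.log(x)

def calibrate_and_sample(feat, base_means, base_covs, k=2, n_samples=5, alpha=0.2):
    """Calibrate a Gaussian for `feat` from its k nearest base-class
    statistics, then draw synthetic features from it (illustrative only).

    feat:       (D,) latent feature to calibrate around
    base_means: (C, D) per-class means of base features
    base_covs:  (C, D, D) per-class covariance matrices
    """
    # Pick the k base classes whose means are closest to the feature.
    dists = np.linalg.norm(base_means - feat, axis=1)
    nearest = np.argsort(dists)[:k]
    # Calibrated mean: average of the feature and the nearest class means.
    mu = (base_means[nearest].sum(axis=0) + feat) / (k + 1)
    # Calibrated covariance: mean of nearest covariances plus a constant
    # alpha that adds extra spread (keeps the matrix positive semidefinite).
    cov = base_covs[nearest].mean(axis=0) + alpha
    # Sampled features can then augment the training set at feature level.
    return np.random.multivariate_normal(mu, cov, size=n_samples)
```

Training a classifier head on the union of real and sampled features is what lets such a module improve robustness to distribution shifts like sensor changes, without touching the raw video data.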