Paper Title
Vehicle-Human Interactive Behaviors in Emergency: Data Extraction from Traffic Accident Videos
Paper Authors
Paper Abstract
Currently, studying vehicle-human interactive behavior in emergencies requires large datasets of actual emergency situations, which are almost unavailable. Existing public data sources on autonomous vehicles (AVs) focus either on normal driving scenarios or on emergencies without human involvement. To fill this gap and facilitate related research, this paper provides a new yet convenient way to extract interactive behavior data (i.e., the trajectories of vehicles and humans) from actual accident videos captured by surveillance cameras and driving recorders. The main challenge in extracting data from real accident videos is that the recording cameras are uncalibrated and the surveillance angles are unknown. The approach proposed in this paper employs image processing to obtain a new perspective different from that of the original video. Meanwhile, we manually detect and mark object feature points in each image frame. To acquire a gradient of reference ratios, a geometric model is applied in the analysis of reference pixel values, and the feature points are then scaled to the object trajectory according to the ratio gradient. The generated trajectories not only fully restore the object movements but also reflect changes in vehicle velocity and rotation based on the feature point distributions.
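The two core steps the abstract describes, re-projecting the uncalibrated video into a new perspective and scaling marked feature points by a gradient of reference ratios, can be sketched roughly as follows. This is a minimal illustration in Python with NumPy only; the function names, the four-point correspondences, and the assumption that the pixel-to-metre ratio varies linearly with image row are our own simplifications, not details taken from the paper.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate a 3x3 homography from 4 point correspondences (DLT),
    e.g. mapping a road trapezoid in the frame to a bird's-eye rectangle.
    This stands in for the paper's perspective-change step."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # Null vector of A (smallest singular value) gives the homography entries.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    """Map Nx2 pixel points into the new perspective (homogeneous divide)."""
    pts_h = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def ratio_gradient(row_a, ratio_a, row_b, ratio_b):
    """Assumed-linear pixel-to-metre ratio as a function of image row,
    calibrated from a reference object of known size observed at two rows.
    Returns a callable giving the ratio at any row."""
    slope = (ratio_b - ratio_a) / (row_b - row_a)
    return lambda row: ratio_a + slope * (row - row_a)
```

A marked feature point's per-frame pixel displacement can then be multiplied by the ratio at its image row to recover metric motion, from which velocity and rotation changes follow.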