Paper Title

Spatio-Temporal-based Context Fusion for Video Anomaly Detection

Paper Authors

Chao Hu, Weibin Qiu, Weijie Wu, Liqiang Zhu

Paper Abstract

Video anomaly detection aims to discover abnormal events in videos, where the principal subjects are targets such as people and vehicles. Each target in video data carries rich spatio-temporal context information, yet most existing methods focus only on the temporal context and ignore the role of the spatial context in anomaly detection. Spatial context information, which represents the relationship between a detected target and its surrounding targets, is meaningful for anomaly detection. To this end, a video anomaly detection algorithm based on target spatio-temporal context fusion is proposed. First, targets in each video frame are extracted with an object detection network to reduce background interference, and the optical flow map of two adjacent frames is computed as the motion feature. Multiple targets in a video frame are then used to construct the spatial context, re-encoding the target appearance and motion features. Finally, these features are reconstructed by a spatio-temporal dual-stream network, and the reconstruction error is used as the anomaly score. The algorithm achieves frame-level AUCs of 98.5% and 86.3% on the UCSDped2 and Avenue datasets, respectively. On UCSDped2, the spatio-temporal dual-stream network improves the frame-level AUC by 5.1% and 0.3% over the temporal-only and spatial-only stream networks, respectively, and spatial context encoding adds a further 1%, which verifies the method's effectiveness.
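The pipeline in the abstract (per-target appearance features plus optical-flow motion features, reconstructed by a two-stream network whose reconstruction error scores each frame) can be illustrated with a minimal PyTorch sketch. This is a toy under stated assumptions, not the paper's implementation: the autoencoder layers, feature dimensions, and the max-over-targets frame aggregation below are all hypothetical, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class StreamAE(nn.Module):
    """Toy single-stream autoencoder; the paper's real architecture is not
    given in the abstract, so the layers and sizes here are placeholders."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def frame_anomaly_score(appearance: torch.Tensor, motion: torch.Tensor,
                        app_ae: StreamAE, mot_ae: StreamAE) -> torch.Tensor:
    """Score one frame from its detected targets.

    `appearance` and `motion` are (num_targets, dim) batches: appearance
    features of object crops (after spatial-context re-encoding) and
    optical-flow features of the same targets. Shapes are hypothetical.
    """
    app_err = ((app_ae(appearance) - appearance) ** 2).mean(dim=1)
    mot_err = ((mot_ae(motion) - motion) ** 2).mean(dim=1)
    # Let the most anomalous target dominate the frame score (a common
    # convention in object-centric methods; the abstract does not say how
    # per-target errors are aggregated).
    return (app_err + mot_err).max()

# Hypothetical usage: 8 detected targets with made-up feature sizes.
app_ae, mot_ae = StreamAE(512), StreamAE(128)
score = frame_anomaly_score(torch.randn(8, 512), torch.randn(8, 128),
                            app_ae, mot_ae)
```

Summing the two streams' errors reflects the fusion idea: a target is flagged if either its appearance or its motion reconstructs poorly.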

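The frame-level AUC quoted in the results is the area under the ROC curve of per-frame anomaly scores against per-frame ground-truth labels. A minimal sketch with scikit-learn, using made-up scores and labels; the per-video min-max normalization is a common convention in this literature, assumed here rather than stated in the abstract:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Made-up per-frame anomaly scores and ground truth (1 = anomalous frame).
scores = np.array([0.12, 0.90, 0.85, 0.10, 0.08])
labels = np.array([0, 1, 1, 0, 0])

# Normalize scores to [0, 1] per video before computing the AUC
# (assumed convention; epsilon guards against a constant-score video).
norm = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
print(f"frame-level AUC: {roc_auc_score(labels, norm):.3f}")
```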