Paper Title

G-LBM: Generative Low-dimensional Background Model Estimation from Video Sequences

Authors

Behnaz Rezaei, Amirreza Farnoosh, Sarah Ostadabbas

Abstract

In this paper, we propose a computationally tractable and theoretically supported non-linear low-dimensional generative model to represent real-world data in the presence of noise and sparse outliers. The non-linear low-dimensional manifold discovery of data is done by describing a joint distribution over the observations and their low-dimensional representations (i.e., the manifold coordinates). Our model, called the generative low-dimensional background model (G-LBM), admits variational operations on the distribution of the manifold coordinates and simultaneously generates a low-rank structure of the latent manifold given the data. Our probabilistic model therefore captures the intuition of non-probabilistic low-dimensional manifold learning. G-LBM selects the intrinsic dimensionality of the underlying manifold of the observations, and its probabilistic nature models the noise in the observation data. G-LBM has direct application in background scene model estimation from video sequences; we evaluated its performance on the SBMnet-2016 and BMC2012 datasets, where it achieved performance higher than or comparable to other state-of-the-art methods while being agnostic to the background scenes in the videos. Moreover, under challenges such as camera jitter and background motion, G-LBM robustly estimates the background by effectively modeling the uncertainties in the video observations.
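The abstract pairs a variational treatment of the manifold coordinates with a low-rank constraint on the latent structure. The sketch below is a minimal, hypothetical illustration of that combination in PyTorch, not the paper's implementation: a plain VAE over flattened video frames whose batch of latent means is additionally penalized with a nuclear norm, a standard convex surrogate for rank, to encourage a low-rank latent manifold. The class name `BackgroundVAE`, all layer sizes, and the loss weights `beta` and `gamma` are illustrative assumptions.

```python
# Hypothetical sketch only: a VAE with a nuclear-norm penalty on the batch of
# latent means, loosely mirroring the variational + low-rank idea in G-LBM.
import torch
import torch.nn as nn


class BackgroundVAE(nn.Module):  # name and architecture are assumptions
    def __init__(self, frame_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, frame_dim)
        )

    def forward(self, frames: torch.Tensor):
        # frames: (batch, frame_dim) flattened video frames
        h = self.encoder(frames)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample latent codes differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar


def loss_fn(recon, frames, mu, logvar, beta=1.0, gamma=0.1):
    # Reconstruction term models observation noise in the frames.
    recon_loss = ((recon - frames) ** 2).sum(dim=1).mean()
    # KL term regularizes the posterior over manifold coordinates.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    # Nuclear norm of the (batch, latent_dim) matrix of latent means pushes
    # the latent codes toward a low-rank structure, letting the data select
    # an effective intrinsic dimensionality.
    low_rank = torch.linalg.matrix_norm(mu, ord='nuc')
    return recon_loss + beta * kl + gamma * low_rank
```

Under this reading, the decoder's output for the inferred latent codes serves as the background estimate, while the reconstruction term absorbs observation noise; how the paper handles sparse foreground outliers is beyond this sketch.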
