Paper Title

Integration of the 3D Environment for UAV Onboard Visual Object Tracking

Paper Authors

Vujasinović, Stéphane, Becker, Stefan, Breuer, Timo, Bullinger, Sebastian, Scherer-Negenborn, Norbert, Arens, Michael

Paper Abstract

Single visual object tracking from an unmanned aerial vehicle (UAV) poses fundamental challenges such as object occlusion, small-scale objects, background clutter, and abrupt camera motion. To tackle these difficulties, we propose to integrate the 3D structure of the observed scene into a detection-by-tracking algorithm. We introduce a pipeline that combines a model-free visual object tracker, a sparse 3D reconstruction, and a state estimator. The 3D reconstruction of the scene is computed with an image-based Structure-from-Motion (SfM) component that enables us to leverage a state estimator in the corresponding 3D scene during tracking. By representing the position of the target in 3D space rather than in image space, we stabilize the tracking during ego-motion and improve the handling of occlusions, background clutter, and small-scale objects. We evaluated our approach on prototypical image sequences, captured from a UAV with low-altitude oblique views. For this purpose, we adapted an existing dataset for visual object tracking and reconstructed the observed scene in 3D. The experimental results demonstrate that the proposed approach outperforms methods using plain visual cues as well as approaches leveraging image-space-based state estimations. We believe that our approach can be beneficial for traffic monitoring, video surveillance, and navigation.
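To make the pipeline idea concrete, below is a minimal Python sketch (not the authors' implementation) of how a 2D tracker output could be lifted into a reconstructed 3D scene and smoothed by a state estimator: the tracked box center is back-projected onto an assumed ground plane using camera intrinsics and a pose such as those an SfM reconstruction provides, filtered with a constant-velocity Kalman filter in world coordinates, and reprojected into the image. The function names, the ground-plane assumption, and the noise parameters are illustrative choices, not details taken from the paper.

```python
# A minimal sketch of the idea described in the abstract: lift a 2D tracker
# measurement into the 3D scene, filter the target state in world
# coordinates, and project the smoothed estimate back into the image.
# Intrinsics K, pose (R, t), and the ground-plane lifting are assumptions
# made for illustration, not the paper's actual method.
import numpy as np


def backproject_to_ground(uv, K, R, t, plane_z=0.0):
    """Lift a pixel to the 3D point where its viewing ray hits the plane z = plane_z."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # ray in camera frame
    ray_world = R.T @ ray_cam                                   # ray direction in world frame
    cam_center = -R.T @ t                                       # camera center in world frame
    s = (plane_z - cam_center[2]) / ray_world[2]                # ray/plane intersection scale
    return cam_center + s * ray_world


def project_to_image(X, K, R, t):
    """Project a 3D world point back into the image (pinhole model, x_cam = R X + t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]


class Kalman3D:
    """Constant-velocity Kalman filter over a 3D position/velocity state."""

    def __init__(self, x0, dt=1.0, q=1e-2, r=1e-1):
        self.x = np.hstack([x0, np.zeros(3)])            # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                               # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                  # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
        self.Q = q * np.eye(6)                           # process noise (illustrative value)
        self.R = r * np.eye(3)                           # measurement noise (illustrative value)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        Kg = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + Kg @ y
        self.P = (np.eye(6) - Kg @ self.H) @ self.P
        return self.x[:3]


# Example per-frame step, assuming K, R, t come from the SfM reconstruction
# and `box_center` is the 2D tracker's output:
#   kf.predict()
#   X = backproject_to_ground(box_center, K, R, t)
#   X_smooth = kf.update(X)
#   uv_smooth = project_to_image(X_smooth, K, R, t)   # e.g., to re-anchor after occlusion
```

Filtering in world coordinates rather than image coordinates is what makes the predicted position insensitive to abrupt camera motion: the target's 3D state evolves smoothly even when its image-space location jumps between frames.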
