Paper Title
Tracklets Predicting Based Adaptive Graph Tracking
Paper Authors
Paper Abstract
Most existing tracking methods link detected boxes to tracklets using a linear combination of feature cosine distance and box overlap. However, the problem of inconsistent features of the same object across two different frames remains. In addition, when extracting features, only appearance information is used; neither the spatial relationships between objects nor the information carried by the tracklets is considered. We present an accurate, end-to-end learning framework for multi-object tracking, namely \textbf{TPAGT}. It re-extracts the features of the tracklets in the current frame based on motion prediction, which is the key to solving the feature-inconsistency problem. The adaptive graph neural network in TPAGT fuses location, appearance, and historical information, and plays an important role in distinguishing different objects. In the training phase, we propose a balanced MSE loss to overcome the problem of unbalanced samples. Experiments show that our method reaches state-of-the-art performance, achieving 76.5\% MOTA on the MOT16 challenge and 76.2\% MOTA on the MOT17 challenge.
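The abstract does not spell out how the balanced MSE loss reweights matched and unmatched pairs. The sketch below shows one common way to balance an MSE objective over a sparse tracklet-detection assignment matrix, assuming PyTorch and a binary target matrix; the function name `balanced_mse_loss` and the equal per-class averaging are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch

def balanced_mse_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Illustrative balanced MSE loss over a predicted affinity matrix.

    pred, target: (N, M) tensors, where target[i, j] = 1 if detection j
    belongs to tracklet i, else 0. Because most entries are 0 (non-matching
    pairs), a plain MSE is dominated by negatives; here the squared errors
    of positive and negative entries are averaged separately and summed,
    so both classes contribute equally to the loss.
    """
    pos_mask = target > 0.5          # matched tracklet-detection pairs
    neg_mask = ~pos_mask             # all other (unmatched) pairs
    sq_err = (pred - target) ** 2
    pos_loss = sq_err[pos_mask].sum() / (pos_mask.sum() + eps)
    neg_loss = sq_err[neg_mask].sum() / (neg_mask.sum() + eps)
    return pos_loss + neg_loss

# Usage sketch: a 3-tracklet x 4-detection affinity prediction.
pred = torch.rand(3, 4)
target = torch.zeros(3, 4)
target[0, 1] = target[1, 0] = target[2, 3] = 1.0
loss = balanced_mse_loss(pred, target)
```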