Paper Title
Ev-TTA: Test-Time Adaptation for Event-Based Object Recognition
Paper Authors
Paper Abstract
We introduce Ev-TTA, a simple, effective test-time adaptation algorithm for event-based object recognition. While event cameras are proposed to provide measurements of scenes with fast motions or drastic illumination changes, many existing event-based recognition algorithms suffer from performance deterioration under extreme conditions due to significant domain shifts. Ev-TTA mitigates the severe domain gaps by fine-tuning the pre-trained classifiers during the test phase using loss functions inspired by the spatio-temporal characteristics of events. Since the event data is a temporal stream of measurements, our loss function enforces similar predictions for adjacent events to quickly adapt to the changed environment online. Also, we utilize the spatial correlations between two polarities of events to handle noise under extreme illumination, where different polarities of events exhibit distinctive noise distributions. Ev-TTA demonstrates large performance gains on a wide range of event-based object recognition tasks without extensive additional training. Our formulation can be successfully applied regardless of input representations and further extended into regression tasks. We expect Ev-TTA to provide the key technique to deploy event-based vision algorithms in challenging real-world applications where significant domain shift is inevitable.
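To make the test-time objective concrete, below is a minimal sketch of a prediction-consistency loss that enforces similar classifier outputs on temporally adjacent event slices, in the spirit of the abstract's description. It assumes a PyTorch classifier `model` and two input tensors `x_t` and `x_t_adj` built from adjacent temporal slices of the same event stream; the names, the symmetric-KL form, and the single-step update are illustrative assumptions, not the authors' exact formulation, and the polarity-based noise handling mentioned in the abstract is not shown.

```python
import torch
import torch.nn.functional as F

def prediction_similarity_loss(logits_a, logits_b):
    """Symmetric KL divergence between class predictions of two
    temporally adjacent event slices (a stand-in for the temporal
    consistency objective described in the abstract)."""
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    p_a, p_b = log_p_a.exp(), log_p_b.exp()
    kl_ab = F.kl_div(log_p_b, p_a, reduction="batchmean")  # KL(p_a || p_b)
    kl_ba = F.kl_div(log_p_a, p_b, reduction="batchmean")  # KL(p_b || p_a)
    return kl_ab + kl_ba

def test_time_adapt_step(model, optimizer, x_t, x_t_adj):
    """One online adaptation step on a pair of adjacent event slices
    (hypothetical helper; the real method may adapt only a subset of
    parameters, e.g. normalization layers)."""
    model.train()  # keep normalization statistics adaptive at test time
    logits_t = model(x_t)
    logits_adj = model(x_t_adj)
    loss = prediction_similarity_loss(logits_t, logits_adj)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits_t.detach()  # predictions for the current slice
```

A usage pattern under these assumptions would wrap a pre-trained classifier with a small-learning-rate optimizer and call `test_time_adapt_step` on each incoming pair of event slices, so the model adapts online while it keeps producing predictions.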