Paper Title
On Improving Temporal Consistency for Online Face Liveness Detection
Authors
Abstract
In this paper, we focus on improving online face liveness detection to enhance the security of downstream face recognition systems. Most existing frame-based methods suffer from prediction inconsistency across time. To address this issue, we propose a simple yet effective solution based on temporal consistency. Specifically, in the training stage, a temporal self-supervision loss and a class consistency loss are introduced alongside the softmax cross-entropy loss to impose the temporal consistency constraint. In the deployment stage, a training-free, non-parametric uncertainty estimation module is developed to smooth the predictions adaptively. Beyond the common evaluation protocol, a video-segment-based evaluation is proposed to accommodate more practical scenarios. Extensive experiments demonstrate that our solution is more robust against several presentation attacks in various scenarios and outperforms the state of the art on multiple public datasets by at least 40% in terms of ACER. Moreover, with much lower computational complexity (33% fewer FLOPs), it shows great potential for low-latency online applications.
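To illustrate the deployment-stage idea, below is a minimal sketch of training-free, non-parametric uncertainty-weighted smoothing of per-frame liveness scores. This is an illustrative assumption, not the paper's exact module: here "uncertainty" is estimated as each frame's deviation from its sliding-window mean, and frames with higher local deviation contribute less to the smoothed score. The function name `smooth_scores` and all parameters are hypothetical.

```python
import numpy as np

def smooth_scores(frame_scores, window=5, eps=1e-6):
    """Uncertainty-weighted temporal smoothing of per-frame liveness scores.

    Hypothetical sketch: the non-parametric uncertainty of each frame is
    its absolute deviation from the local window mean; each frame's score
    is then averaged with weights inversely proportional to uncertainty.
    """
    scores = np.asarray(frame_scores, dtype=float)
    n = len(scores)
    smoothed = np.empty(n)
    for t in range(n):
        # Sliding window centered at frame t, clipped at sequence boundaries.
        lo, hi = max(0, t - window // 2), min(n, t + window // 2 + 1)
        local = scores[lo:hi]
        # Non-parametric uncertainty: deviation from the window mean (no learned parameters).
        uncertainty = np.abs(local - local.mean()) + eps
        # Inverse-uncertainty weights yield a convex combination of window scores.
        weights = 1.0 / uncertainty
        smoothed[t] = np.sum(weights * local) / np.sum(weights)
    return smoothed
```

Because the output at each step is a convex combination of the window's scores, the smoothed sequence never leaves the range of the raw scores, which keeps the decision threshold interpretation intact while damping single-frame flips.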