Paper Title

Flexible-Modal Face Anti-Spoofing: A Benchmark

Paper Authors

Zitong Yu, Ajian Liu, Chenxu Zhao, Kevin H. M. Cheng, Xu Cheng, Guoying Zhao

Paper Abstract

Face anti-spoofing (FAS) plays a vital role in securing face recognition systems against presentation attacks. Benefiting from maturing camera sensors, single-modal (RGB) and multi-modal (e.g., RGB+Depth) FAS have been applied in various scenarios with different sensor/modality configurations. Existing single- and multi-modal FAS methods usually train and deploy a separate model for each possible modality scenario, which can be redundant and inefficient. Can we train a unified model and flexibly deploy it under various modality scenarios? In this paper, we establish the first flexible-modal FAS benchmark with the principle 'train one for all'. Specifically, with trained multi-modal (RGB+Depth+IR) FAS models, both intra- and cross-dataset testing is conducted on four flexible-modal sub-protocols (RGB, RGB+Depth, RGB+IR, and RGB+Depth+IR). We also investigate prevalent deep models and feature fusion strategies for flexible-modal FAS. We hope this new benchmark will facilitate future research on multi-modal FAS. The protocols and code are available at https://github.com/ZitongYu/Flex-Modal-FAS.
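
The 'train one for all' setup can be pictured with a toy multi-branch model: one network is trained on all three modalities, then evaluated under each sub-protocol with only the available inputs. The sketch below is purely illustrative; the FlexModalFAS class, the tiny convolutional branches, the zero-filling of absent modalities, and the concatenation fusion are assumptions for exposition, not the benchmark's actual code (see the repository above for the real protocols).

```python
import torch
import torch.nn as nn


class FlexModalFAS(nn.Module):
    """Per-modality branches with concatenation fusion and one live/spoof head."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.feat_dim = feat_dim
        # One small convolutional branch per modality: RGB (3ch), Depth/IR (1ch each).
        self.branches = nn.ModuleDict({
            name: nn.Sequential(
                nn.Conv2d(3 if name == "rgb" else 1, 32, 3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, feat_dim),
            )
            for name in ("rgb", "depth", "ir")
        })
        # Fused feature vector -> single live/spoof logit.
        self.head = nn.Linear(3 * feat_dim, 1)

    def forward(self, inputs: dict) -> torch.Tensor:
        some_input = next(iter(inputs.values()))
        feats = []
        for name, branch in self.branches.items():
            if name in inputs:
                # Modality available under this sub-protocol.
                feats.append(branch(inputs[name]))
            else:
                # Modality missing at deployment: zero-fill its feature slot
                # (one simple fusion choice; the paper compares several strategies).
                feats.append(some_input.new_zeros(some_input.shape[0], self.feat_dim))
        return self.head(torch.cat(feats, dim=1))


# One trained model, evaluated under all four flexible-modal sub-protocols.
model = FlexModalFAS().eval()
rgb = torch.randn(2, 3, 112, 112)
depth = torch.randn(2, 1, 112, 112)
ir = torch.randn(2, 1, 112, 112)
for protocol in ({"rgb": rgb},
                 {"rgb": rgb, "depth": depth},
                 {"rgb": rgb, "ir": ir},
                 {"rgb": rgb, "depth": depth, "ir": ir}):
    with torch.no_grad():
        logit = model(protocol)
    print(sorted(protocol), logit.shape)  # same weights, flexible inputs
```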
