Paper Title

Are Face Detection Models Biased?

Paper Authors

Surbhi Mittal, Kartik Thakral, Puspita Majumdar, Mayank Vatsa, Richa Singh

Abstract

The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research in bias focuses primarily on facial recognition and attribute prediction, with scarce emphasis on face detection. Existing studies consider face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in the domain of face detection through facial region localization, which is currently unexplored. Since facial region localization is an essential task for all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotation for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin-tone, and (ii) interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA.
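The disparity the abstract reports hinges on scoring each detector's localization separately for each demographic subgroup. The paper does not include code here; the following is a minimal, hypothetical Python sketch of how such a per-subgroup detection rate could be computed with the standard IoU matching criterion. The function and field names (subgroup_detection_rate, gt_box, pred_boxes, subgroup) and the 0.5 IoU threshold are illustrative assumptions, not the authors' exact protocol.

# Hypothetical sketch (not from the paper): per-subgroup face detection
# accuracy under an IoU-based matching criterion. Boxes are (x1, y1, x2, y2).
from collections import defaultdict

def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def subgroup_detection_rate(samples, iou_threshold=0.5):
    # samples: iterable of dicts with 'gt_box' (annotated face box),
    # 'pred_boxes' (detector outputs for the image), and 'subgroup'
    # (e.g. a gender or skin-tone label from the annotations).
    # Returns, per subgroup, the fraction of annotated faces matched by
    # at least one predicted box at the given IoU threshold.
    hits = defaultdict(int)
    totals = defaultdict(int)
    for s in samples:
        totals[s['subgroup']] += 1
        if any(iou(s['gt_box'], p) >= iou_threshold for p in s['pred_boxes']):
            hits[s['subgroup']] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy usage with two hypothetical subgroups:
samples = [
    {'gt_box': (10, 10, 60, 60), 'pred_boxes': [(12, 11, 58, 62)], 'subgroup': 'A'},
    {'gt_box': (20, 20, 80, 90), 'pred_boxes': [],                 'subgroup': 'B'},
]
print(subgroup_detection_rate(samples))  # {'A': 1.0, 'B': 0.0}

Comparing such rates across the gender and skin-tone labels provided in the F2LA annotations is the kind of gap the abstract describes as a disparity in detection accuracy.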
