Paper Title
InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics
Authors
Abstract
This work explores the biases in learning processes based on deep neural network architectures. We analyze how bias affects deep learning processes through a toy example using the MNIST database and a case study on gender detection from face images. We employ two gender detection models based on popular deep neural networks. We present a comprehensive analysis of the effects that an unbalanced training dataset has on the features learned by the models. We show how bias impacts the activations of gender detection models based on face images. Finally, we propose InsideBias, a novel method to detect biased models. InsideBias is based on how the models represent the information instead of how they perform, which is the normal practice in existing methods for bias detection. Our strategy with InsideBias allows us to detect biased models with very few samples (only 15 images in our case study). Our experiments include 72K face images from 24K identities and 3 ethnic groups.
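To make the core idea concrete, the sketch below compares the average activation level of one internal layer across two demographic groups, using only a handful of images per group, in line with the abstract's claim that representations (not performance) reveal bias. This is a minimal illustration assuming a Keras model; the layer name, the mean-activation statistic, and the 0.9 threshold are assumptions made for illustration, not the paper's exact formulation.

```python
# Minimal sketch of the InsideBias idea: if a model encodes one demographic
# group with systematically weaker internal activations than another, its
# learned representation is unbalanced. The statistic and threshold below
# are illustrative assumptions, not the paper's exact definition.
import numpy as np
import tensorflow as tf

def mean_layer_activation(model, layer_name, images):
    """Average activation magnitude of one intermediate layer over a batch."""
    extractor = tf.keras.Model(inputs=model.input,
                               outputs=model.get_layer(layer_name).output)
    acts = extractor(images, training=False).numpy()
    return float(np.mean(np.abs(acts)))

def inside_bias_score(model, layer_name, images_group_a, images_group_b):
    """Ratio of average activations between two groups (1.0 = no gap)."""
    act_a = mean_layer_activation(model, layer_name, images_group_a)
    act_b = mean_layer_activation(model, layer_name, images_group_b)
    return min(act_a, act_b) / max(act_a, act_b)

# Usage sketch: roughly 15 images per group, matching the case study's scale.
# "conv_block4" and the 0.9 cutoff are hypothetical placeholders.
# score = inside_bias_score(model, "conv_block4", imgs_group_a, imgs_group_b)
# if score < 0.9:
#     print("Internal representations differ across groups: possible bias.")
```

Note that this probe never looks at classification accuracy: it only needs a forward pass per image, which is why so few samples suffice compared with performance-based bias audits.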