Title

Bias and Fairness on Multimodal Emotion Detection Algorithms

Authors

Matheus Schmitz, Rehan Ahmed, Jimi Cao

Abstract

Numerous studies have shown that machine learning algorithms can latch onto protected attributes such as race and gender and generate predictions that systematically discriminate against one or more groups. To date, the majority of bias and fairness research has been on unimodal models. In this work, we explore the biases that exist in emotion recognition systems in relation to the modalities utilized, and study how multimodal approaches affect system bias and fairness. We consider audio, text, and video modalities, as well as all possible multimodal combinations of those, and find that text alone has the least bias and accounts for the majority of the models' performance, raising doubts about the value of multimodal emotion recognition systems when bias and fairness are desired alongside model performance.
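The abstract compares bias across unimodal and multimodal emotion classifiers over a protected attribute. Below is a minimal sketch of one common way to quantify such bias, a per-group accuracy gap; the metric choice, the `accuracy_gap` helper, and all data are illustrative assumptions, not the paper's actual protocol or results.

```python
# Minimal sketch: compare a group-fairness measure (per-group accuracy gap)
# across hypothetical modality combinations. All predictions and labels
# below are fabricated for illustration only.
import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two protected groups."""
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
    return max(accs) - min(accs)

# Hypothetical test labels and a binary protected attribute (e.g., gender).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
groups = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])

# Hypothetical predictions from three models on the same test set.
preds = {
    "text":             np.array([0, 1, 1, 0, 1, 0, 0, 1]),
    "audio":            np.array([0, 1, 0, 1, 1, 0, 1, 0]),
    "text+audio+video": np.array([0, 1, 1, 0, 1, 1, 1, 1]),
}

for modality, y_pred in preds.items():
    acc = np.mean(y_true == y_pred)
    gap = accuracy_gap(y_true, y_pred, groups)
    print(f"{modality:>16}: accuracy={acc:.2f}, group gap={gap:.2f}")
```

A smaller gap at comparable accuracy is the pattern the abstract attributes to the text-only modality; the study's actual fairness metrics may differ from this simple accuracy-gap example.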
