Paper Title
WEMAC: Women and Emotion Multi-modal Affective Computing dataset
Paper Authors
Paper Abstract
Among the seventeen Sustainable Development Goals (SDGs) proposed within the 2030 Agenda and adopted by all United Nations member states, the fifth SDG is a call for action to turn gender equality into a fundamental human right and an essential foundation for a better world. It includes the eradication of all types of violence against women. Within this context, the UC3M4Safety research team aims to develop Bindi, a cyber-physical system with embedded Artificial Intelligence algorithms for real-time monitoring of the user's affective state, with the ultimate goal of early detection of risk situations for women. To this end, we make use of wearable affective computing, including smart sensors, data encryption for the secure and accurate collection of presumed crime evidence, and remote connection to protection agents. Towards the development of such a system, recordings of different laboratory and in-the-wild datasets are in progress; these are contained within the UC3M4Safety Database. This paper presents and details the first release of WEMAC, a novel multi-modal dataset comprising a laboratory-based experiment in which 47 women volunteers were exposed to validated audio-visual stimuli through a virtual reality headset to induce real emotions, while physiological signals, speech signals, and self-reports were acquired. We believe this dataset will serve and assist research on multi-modal affective computing using physiological and speech information.