Paper Title
Unsupervised Instance Discriminative Learning for Depression Detection from Speech Signals
Paper Authors
Paper Abstract
Major Depressive Disorder (MDD) is a severe illness that affects millions of people, and it is critical to diagnose this disorder as early as possible. Detecting depression from voice signals can be of great help to physicians and can be done without any invasive procedure. Since relevant labelled data are scarce, we propose a modified Instance Discriminative Learning (IDL) method, an unsupervised pre-training technique, to extract augment-invariant and instance-spread-out embeddings. To learn augment-invariant embeddings, various data augmentation methods for speech are investigated, and time-masking yields the best performance. To learn instance-spread-out embeddings, we explore methods for sampling instances for a training batch (distinct speaker-based and random sampling). Distinct speaker-based sampling is found to provide better performance than random sampling, and we hypothesize that this is because relevant speaker information is preserved in the embedding. Additionally, we propose a novel sampling strategy, Pseudo Instance-based Sampling (PIS), based on clustering algorithms, to enhance the spread-out characteristics of the embeddings. Experiments are conducted with DepAudioNet on the DAIC-WOZ (English) and CONVERGE (Mandarin) datasets, and statistically significant improvements, with p-values of 0.0015 and 0.05, respectively, are observed using PIS in the detection of MDD relative to the baseline without pre-training.
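The abstract names two techniques without giving implementation detail: time-masking augmentation for learning augment-invariant embeddings, and Pseudo Instance-based Sampling (PIS), which uses clustering to build batches of distinct pseudo-instances. The sketch below is only an illustration under stated assumptions (the function names, mask width, choice of K-Means, and one-utterance-per-cluster batching are not specified by the abstract); it is not the authors' implementation.

```python
# Hedged sketch of the two ideas named in the abstract: time-masking augmentation
# and clustering-based batch sampling (PIS-style). All specifics are assumptions.
import numpy as np
from sklearn.cluster import KMeans


def time_mask(features, max_mask_len=40, rng=None):
    """Zero out a random span of time frames in a (frames x dims) feature matrix."""
    rng = rng or np.random.default_rng()
    masked = features.copy()
    n_frames = masked.shape[0]
    mask_len = int(rng.integers(1, max_mask_len + 1))
    start = int(rng.integers(0, max(1, n_frames - mask_len)))
    masked[start:start + mask_len, :] = 0.0
    return masked


def pis_batch_indices(embeddings, batch_size, rng=None):
    """Cluster utterance embeddings and pick one member per cluster,
    so a training batch spans distinct pseudo-instances."""
    rng = rng or np.random.default_rng()
    labels = KMeans(n_clusters=batch_size, n_init=10, random_state=0).fit_predict(embeddings)
    batch = []
    for cluster_id in range(batch_size):
        members = np.flatnonzero(labels == cluster_id)
        batch.append(int(rng.choice(members)))
    return batch


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    utterance = rng.standard_normal((300, 40))     # e.g., 300 frames of 40-dim log-mel features
    augmented = time_mask(utterance, rng=rng)      # augmented view for the IDL objective
    pool = rng.standard_normal((500, 128))         # embeddings of 500 utterances
    batch = pis_batch_indices(pool, batch_size=16, rng=rng)
    print(augmented.shape, batch)
```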