Paper Title


Speaker Adaptation Using Spectro-Temporal Deep Features for Dysarthric and Elderly Speech Recognition

Authors

Mengzhe Geng, Xurong Xie, Zi Ye, Tianzi Wang, Guinan Li, Shujie Hu, Xunying Liu, Helen Meng

Abstract


Despite the rapid progress of automatic speech recognition (ASR) technologies targeting normal speech in recent decades, accurate recognition of dysarthric and elderly speech remains a highly challenging task to date. Sources of heterogeneity commonly found in normal speech, including accent or gender, when further compounded with variability over age and speech pathology severity level, create large diversity among speakers. To this end, speaker adaptation techniques play a key role in the personalization of ASR systems for such users. Motivated by the spectro-temporal differences between dysarthric, elderly and normal speech, which systematically manifest in articulatory imprecision, decreased volume and clarity, slower speaking rates and increased dysfluencies, novel spectro-temporal subspace basis deep embedding features derived using SVD speech spectrum decomposition are proposed in this paper to facilitate auxiliary feature-based speaker adaptation of state-of-the-art hybrid DNN/TDNN and end-to-end Conformer speech recognition systems. Experiments were conducted on four tasks: the English UASpeech and TORGO dysarthric speech corpora, and the English DementiaBank Pitt and Cantonese JCCOCC MoCA elderly speech datasets. The proposed spectro-temporal deep feature adapted systems outperformed baseline i-Vector and xVector adaptation by up to 2.63% absolute (8.63% relative) word error rate (WER) reduction. Consistent performance improvements were retained after model-based speaker adaptation using learning hidden unit contributions (LHUC) was further applied. The best speaker-adapted system using the proposed spectral basis embedding features produced the lowest published WER of 25.05% on the UASpeech test set of 16 dysarthric speakers.
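To make the SVD speech spectrum decomposition step concrete, the sketch below shows one plausible way to extract top-ranked spectral subspace bases from a (frequency x time) spectrogram with NumPy. This is only an illustrative approximation of the paper's front end: the function name `spectral_basis_features`, the choice of `k`, and the singular-value weighting are assumptions, and the paper's subsequent DNN step that maps such bases to speaker-level deep embeddings is omitted here.

```python
import numpy as np

def spectral_basis_features(spectrogram: np.ndarray, k: int = 2) -> np.ndarray:
    """Hypothetical sketch: SVD-decompose a (freq x time) spectrogram and keep
    the top-k spectral (left singular) bases as a fixed-size descriptor.
    The paper additionally feeds such bases through a DNN to obtain speaker
    embeddings; that step is not reproduced here."""
    # full_matrices=False gives U of shape (freq, min(freq, time)).
    U, s, Vt = np.linalg.svd(spectrogram, full_matrices=False)
    # Weight the top-k spectral bases by their singular values and flatten
    # into a single feature vector of length freq * k.
    return (U[:, :k] * s[:k]).flatten()

# Toy usage on a random "spectrogram" (80 mel bins x 200 frames).
rng = np.random.default_rng(0)
feat = spectral_basis_features(rng.random((80, 200)), k=2)
print(feat.shape)  # (160,)
```

Because the leading singular vectors capture the dominant spectral envelope and its temporal weighting, such a decomposition yields a compact, duration-independent summary that can serve as an auxiliary speaker-adaptation feature alongside acoustic inputs.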
