Paper Title


Improving self-supervised pretraining models for epileptic seizure detection from EEG data

Paper Authors

Sudip Das, Pankaj Pandey, Krishna Prasad Miyapuram

Abstract


There is abundant medical data on the internet, most of which is unlabeled. Traditional supervised learning algorithms are often limited by the amount of labeled data, especially in the medical domain, where labeling is costly in terms of the human processing and specialized expertise required. Labels are also prone to human error and bias, as a select few expert annotators produce them. These issues are mitigated by self-supervision, where we generate pseudo-labels from unlabeled data by examining the data itself. This paper presents various self-supervision strategies to enhance the performance of a time-series-based Diffusion Convolutional Recurrent Neural Network (DCRNN) model. The weights learned in the self-supervised pretraining phase can be transferred to the supervised training phase to boost the model's predictive capability. Our techniques are tested on an extension of the DCRNN model, an RNN with graph diffusion convolutions that models the spatiotemporal dependencies present in EEG signals. When the weights learned during pretraining are transferred to a DCRNN model to determine whether an EEG time window has a characteristic seizure signal associated with it, our method yields an AUROC score $1.56\%$ higher than current state-of-the-art models on the TUH EEG seizure corpus.
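The core transfer-learning idea in the abstract, pretraining an encoder on a self-supervised pretext task and then copying its weights into a supervised seizure classifier, can be illustrated in miniature. The sketch below is not the paper's DCRNN; it uses a single hypothetical linear encoder, made-up shapes (19 EEG channels, hidden size 32), and a toy pretext objective (pulling embeddings of consecutive windows together) purely to show the weight-transfer pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 19 EEG channels, hidden size 32 (not from the paper).
N_CHANNELS, HIDDEN = 19, 32

def init_encoder():
    """Randomly initialised encoder weights (a stand-in for a DCRNN encoder)."""
    return {"enc_W": rng.normal(size=(N_CHANNELS, HIDDEN)) * 0.1}

def pretrain(weights, windows, lr=0.01):
    """Toy pretext task: nudge the encoder so consecutive EEG windows map to
    similar embeddings -- a crude stand-in for self-supervised pretraining."""
    for a, b in zip(windows[:-1], windows[1:]):
        za, zb = a @ weights["enc_W"], b @ weights["enc_W"]
        # Gradient of 0.5 * ||za - zb||^2 with respect to enc_W, for window a.
        weights["enc_W"] -= lr * np.outer(a, za - zb)
    return weights

def build_classifier(pretrained):
    """Supervised phase: copy the pretrained encoder weights and attach a
    freshly initialised seizure-detection head."""
    return {
        "enc_W": pretrained["enc_W"].copy(),       # transferred weights
        "head_w": rng.normal(size=HIDDEN) * 0.1,   # new, untrained head
    }

# Unlabeled windows drive pretraining; the classifier reuses the result.
windows = [rng.normal(size=N_CHANNELS) for _ in range(8)]
pre = pretrain(init_encoder(), windows)
clf = build_classifier(pre)

# Seizure logit for one window: encode, then apply the supervised head.
score = float(windows[0] @ clf["enc_W"] @ clf["head_w"])
```

In a real pipeline the same pattern appears as loading a pretrained checkpoint into the encoder portion of the supervised model (e.g. a framework's state-dict mechanism) before fine-tuning on the labeled seizure windows.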
