Paper Title


TSception: A Deep Learning Framework for Emotion Detection Using EEG

Authors

Yi Ding, Neethu Robinson, Qiuhao Zeng, Duo Chen, Aung Aung Phyo Wai, Tih-Shih Lee, Cuntai Guan

Abstract


In this paper, we propose a deep learning framework, TSception, for emotion detection from electroencephalogram (EEG). TSception consists of temporal and spatial convolutional layers, which learn discriminative representations in the time and channel domains simultaneously. The temporal learner consists of multi-scale 1D convolutional kernels whose lengths are related to the sampling rate of the EEG signal, which learns multiple temporal and frequency representations. The spatial learner takes advantage of the asymmetry property of emotion responses at the frontal brain area to learn the discriminative representations from the left and right hemispheres of the brain. In our study, a system is designed to study the emotional arousal in an immersive virtual reality (VR) environment. EEG data were collected from 18 healthy subjects using this system to evaluate the performance of the proposed deep learning network for the classification of low and high emotional arousal states. The proposed method is compared with SVM, EEGNet, and LSTM. TSception achieves a high classification accuracy of 86.03%, which outperforms the prior methods significantly (p<0.05). The code is available at https://github.com/deepBrains/TSception
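The abstract notes that the temporal learner uses multi-scale 1D convolutional kernels whose lengths are tied to the EEG sampling rate, so each kernel covers a different fraction of a second of signal and thus a different temporal/frequency scale. A minimal sketch of this idea, using illustrative ratios (the paper's exact values may differ):

```python
def temporal_kernel_lengths(sampling_rate, ratios=(0.5, 0.25, 0.125)):
    """Derive multi-scale 1D kernel lengths from the EEG sampling rate.

    Each kernel spans a different fraction of one second of signal, so a
    bank of such kernels captures several temporal/frequency scales at
    once. The ratios here are assumptions for illustration only.
    """
    return [max(1, int(sampling_rate * r)) for r in ratios]

# At 128 Hz, the kernels span 0.5 s, 0.25 s, and 0.125 s of signal.
print(temporal_kernel_lengths(128))  # → [64, 32, 16]
```

Tying kernel length to the sampling rate (rather than a fixed sample count) keeps the temporal span of each filter constant across recording setups with different sampling rates.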
