Title
Decoding Visual Responses based on Deep Neural Networks with Ear-EEG Signals
Authors
Abstract
Recently, research on practical brain-computer interfaces (BCIs) has been actively carried out, especially in ambulatory environments. However, in ambulatory conditions the electroencephalography (EEG) signals are distorted by movement artifacts and electromyography signals, which makes it hard to recognize human intention. In addition, because hardware issues are also challenging, ear-EEG has been developed for practical BCIs and is now widely used. However, ear-EEG still contains contaminated signals. In this paper, we propose robust two-stream deep neural networks for walking conditions and analyze the visual-response EEG signals recorded at the scalp and the ear in terms of statistical analysis and BCI performance. We validated the signals with a visual-response paradigm, the steady-state visual evoked potential (SSVEP). BCI performance deteriorated by 3~14% when walking fast at 1.6 m/s. When the proposed method was applied, the accuracies increased by 15% for cap-EEG and by 7% for ear-EEG. The proposed method shows robustness to the ambulatory condition in both session-dependent and session-to-session experiments.
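The abstract does not spell out the decoding pipeline, but SSVEP decoding is conventionally evaluated against a canonical correlation analysis (CCA) baseline: correlate the multi-channel EEG with sine/cosine reference templates at each candidate stimulation frequency and pick the frequency with the highest canonical correlation. The sketch below is such a generic CCA baseline, not the paper's two-stream network; the function names, sampling rate, and candidate frequencies are illustrative assumptions.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y.

    Computed as the largest singular value of Qx^T Qy, where Qx and Qy are
    orthonormal bases of the centered data (QR decomposition).
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference templates at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    comps = []
    for h in range(1, n_harmonics + 1):
        comps.append(np.sin(2 * np.pi * h * freq * t))
        comps.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(comps, axis=1)          # shape: (n_samples, 2*n_harmonics)

def classify_ssvep(X, candidate_freqs, fs):
    """Pick the candidate frequency whose templates correlate best with X.

    X: EEG segment, shape (n_samples, n_channels).
    """
    n = X.shape[0]
    scores = [max_canonical_corr(X, ssvep_reference(f, fs, n))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Illustrative check on a synthetic 10 Hz "SSVEP" buried in noise.
rng = np.random.default_rng(0)
fs = 250                                    # assumed sampling rate
t = np.arange(fs) / fs                      # 1 s segment
stim = np.sin(2 * np.pi * 10.0 * t)
X = np.stack([stim + 0.5 * rng.standard_normal(fs) for _ in range(4)], axis=1)
detected = classify_ssvep(X, [10.0, 12.0, 15.0], fs)
```

A deep network such as the paper's two-stream model replaces these fixed sinusoidal templates with learned features, which is what lets it absorb walking-induced artifacts that degrade a fixed-template method.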