Paper Title
Decoding of Intuitive Visual Motion Imagery Using Convolutional Neural Network under 3D-BCI Training Environment
Paper Authors
Paper Abstract
In this study, we adopted visual motion imagery, a more intuitive brain-computer interface (BCI) paradigm, for decoding intuitive user intentions. We developed a 3-dimensional BCI training platform and applied it to help users perform more intuitive imagination in the visual motion imagery experiment. The experimental tasks were selected from movements that we commonly use in daily life, such as picking up a phone, opening a door, eating food, and pouring water. Nine subjects participated in our experiment. We presented statistical evidence that visual motion imagery is highly correlated with activity in the prefrontal and occipital lobes. In addition, we selected the most appropriate electroencephalography channels for visual motion imagery decoding using a functional connectivity approach and proposed a convolutional neural network architecture for classification. The proposed architecture achieved an average classification performance of 67.50% for the 4 classes from 16 channels across all subjects. This result is encouraging, and it shows the possibility of developing BCI-based device control systems for practical applications such as neuroprostheses and robotic arms.
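The abstract does not describe the network layers themselves, so the following is only a minimal, hypothetical PyTorch sketch of the kind of compact CNN commonly used for decoding 4 imagery classes from 16 EEG channels. The class and channel counts come from the abstract; the window length, sampling rate, kernel sizes, and all other layer parameters are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (NOT the authors' architecture) of a shallow CNN for
# decoding 4 visual-motion-imagery classes from 16 EEG channels.
# Window length, sampling rate, and layer sizes are assumed for illustration.
import torch
import torch.nn as nn

N_CHANNELS = 16   # EEG channels selected via functional connectivity (per the abstract)
N_CLASSES = 4     # picking up a phone, opening a door, eating food, pouring water
N_SAMPLES = 500   # e.g. a 2 s epoch at 250 Hz (assumed)

class VisualMotionImageryCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution applied to each channel separately
            nn.Conv2d(1, 24, kernel_size=(1, 25), padding=(0, 12)),
            # spatial convolution mixing all 16 channels
            nn.Conv2d(24, 24, kernel_size=(N_CHANNELS, 1)),
            nn.BatchNorm2d(24),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 15), stride=(1, 15)),
            nn.Dropout(0.5),
        )
        # infer the flattened feature size with a dummy forward pass
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, N_CHANNELS, N_SAMPLES)).numel()
        self.classifier = nn.Linear(n_feat, N_CLASSES)

    def forward(self, x):
        # x: (batch, 1, channels, time)
        return self.classifier(self.features(x).flatten(start_dim=1))

if __name__ == "__main__":
    model = VisualMotionImageryCNN()
    dummy = torch.randn(8, 1, N_CHANNELS, N_SAMPLES)  # 8 fake EEG epochs
    print(model(dummy).shape)  # -> torch.Size([8, 4])
```

The dummy forward pass at the end only checks tensor shapes; actual training would require the recorded EEG epochs and task labels from the 3D-BCI experiment, which are not part of this sketch.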