Paper Title

Emotional Brain State Classification on fMRI Data Using Deep Residual and Convolutional Networks

Paper Authors

Maxime Tchibozo, Donggeun Kim, Zijing Wang, Xiaofu He

Paper Abstract

The goal of emotional brain state classification on functional MRI (fMRI) data is to recognize brain activity patterns related to specific emotion tasks performed by subjects during an experiment. Distinguishing emotional brain states from other brain states using fMRI data has proven challenging for two reasons: the difficulty of generating fast yet accurate predictions in short time frames, and the difficulty of extracting emotion features that generalize to unseen subjects. To address these challenges, we conducted an experiment in which 22 subjects viewed pictures designed to stimulate either negative, neutral, or rest emotional responses while their brain activity was measured using fMRI. We then developed two distinct convolution-based approaches to decode emotional brain states using only spatial information from single, minimally pre-processed (slice timing and realignment) fMRI volumes. In our first approach, we trained a 1D Convolutional Network (84.9% accuracy; chance level 33%) to classify 3 emotion conditions, using one-way Analysis of Variance (ANOVA) voxel selection combined with hyperalignment. In our second approach, we trained a 3D ResNet-50 model (78.0% accuracy; chance level 50%) to classify 2 emotion conditions directly from single 3D fMRI volumes. Our convolutional and residual classifiers successfully learned group-level emotion features and could decode emotion conditions from fMRI volumes within milliseconds. These approaches could potentially be used in brain-computer interfaces and real-time fMRI neurofeedback research.
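
To make the first pipeline concrete, below is a minimal, hypothetical sketch of what one-way ANOVA voxel selection followed by a 1D convolutional classifier could look like. The voxel count, layer widths, the class name `Emotion1DCNN`, and the random toy data are illustrative assumptions, and the hyperalignment step mentioned in the abstract is omitted; this is not the authors' implementation. The second approach would analogously feed whole 3D volumes to a 3D residual network such as ResNet-50.

```python
# Hypothetical sketch (not the authors' code): one-way ANOVA voxel selection
# followed by a small 1D convolutional classifier over the selected voxels.
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import SelectKBest, f_classif

# Toy stand-in data: flattened fMRI volumes (n_volumes, n_voxels) and 3 emotion labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 5000)).astype(np.float32)  # shapes are assumptions
y = rng.integers(0, 3, size=120)                          # 0=negative, 1=neutral, 2=rest

# Step 1: ANOVA (F-test) voxel selection keeps the k voxels that best separate the conditions.
selector = SelectKBest(f_classif, k=1024)
X_sel = selector.fit_transform(X, y).astype(np.float32)   # (120, 1024)

# Step 2: a small 1D CNN over the selected-voxel vector, treated as a single channel.
class Emotion1DCNN(nn.Module):
    def __init__(self, n_voxels=1024, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (n_voxels // 4), n_classes)

    def forward(self, x):                                   # x: (batch, 1, n_voxels)
        return self.classifier(self.features(x).flatten(1))

model = Emotion1DCNN()
logits = model(torch.from_numpy(X_sel).unsqueeze(1))        # (120, 3) class scores
loss = nn.CrossEntropyLoss()(logits, torch.from_numpy(y))   # standard 3-class training loss
print(logits.shape, loss.item())
```

Because the classifier only needs a single forward pass per volume, this kind of design is consistent with the abstract's claim that predictions can be produced within milliseconds once the model is trained.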
