Title
Federated Contrastive Learning for Volumetric Medical Image Segmentation
Authors
Abstract
Supervised deep learning needs a large amount of labeled data to achieve high performance. However, in medical imaging analysis, each site may only have a limited amount of data and labels, which makes learning ineffective. Federated learning (FL) can help in this regard by learning a shared model while keeping training data local for privacy. Traditional FL requires fully-labeled data for training, which is inconvenient or sometimes infeasible to obtain due to high labeling cost and the requirement of expertise. Contrastive learning (CL), as a self-supervised learning approach, can effectively learn from unlabeled data to pre-train a neural network encoder, followed by fine-tuning for downstream tasks with limited annotations. However, when adopting CL in FL, the limited data diversity on each client makes federated contrastive learning (FCL) ineffective. In this work, we propose an FCL framework for volumetric medical image segmentation with limited annotations. More specifically, we exchange the features in the FCL pre-training process such that diverse contrastive data are provided to each site for effective local CL while keeping raw data private. Based on the exchanged features, global structural matching further leverages the structural similarity to align local features to the remote ones such that a unified feature space can be learned among different sites. Experiments on a cardiac MRI dataset show the proposed framework substantially improves the segmentation performance compared with state-of-the-art techniques.
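The core idea of the feature exchange described above can be sketched as an InfoNCE-style contrastive loss whose negative set is enlarged with features received from remote sites, while raw images stay local. This is a minimal illustrative sketch, not the authors' implementation; the function name, the use of NumPy, and the specific loss form are assumptions for illustration.

```python
import numpy as np

def contrastive_loss_with_remote(query, positive, local_negs, remote_negs,
                                 temperature=0.1):
    """InfoNCE-style loss for one query feature.

    `remote_negs` stands in for the features exchanged from other sites,
    which enlarge the pool of negatives available for local contrastive
    learning (hypothetical sketch of the paper's feature-exchange idea).
    """
    # L2-normalize so dot products become cosine similarities.
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q, p = norm(query), norm(positive)
    # Negatives = local features plus features exchanged from remote sites.
    negs = norm(np.concatenate([local_negs, remote_negs], axis=0))

    pos_logit = np.dot(q, p) / temperature      # similarity to the positive
    neg_logits = negs @ q / temperature         # similarities to all negatives
    logits = np.concatenate([[pos_logit], neg_logits])

    # Cross-entropy with the positive placed at index 0.
    return -pos_logit + np.log(np.sum(np.exp(logits)))
```

The loss decreases as the query aligns with its positive and moves away from both local and exchanged negatives, which is what gives each site a more diverse contrastive signal than its local data alone.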