Paper Title

MaxStyle: Adversarial Style Composition for Robust Medical Image Segmentation

Paper Authors

Chen Chen, Zeju Li, Cheng Ouyang, Matt Sinclair, Wenjia Bai, Daniel Rueckert

Paper Abstract

Convolutional neural networks (CNNs) have achieved remarkable segmentation accuracy on benchmark datasets where training and test sets are from the same domain, yet their performance can degrade significantly on unseen domains, which hinders the deployment of CNNs in many clinical scenarios. Most existing works improve model out-of-domain (OOD) robustness by collecting multi-domain datasets for training, which is expensive and may not always be feasible due to privacy and logistical issues. In this work, we focus on improving model robustness using a single-domain dataset only. We propose a novel data augmentation framework called MaxStyle, which maximizes the effectiveness of style augmentation for model OOD performance. It attaches an auxiliary style-augmented image decoder to a segmentation network for robust feature learning and data augmentation. Importantly, MaxStyle augments data with improved image style diversity and hardness, by expanding the style space with noise and searching for the worst-case style composition of latent features via adversarial training. With extensive experiments on multiple public cardiac and prostate MR datasets, we demonstrate that MaxStyle leads to significantly improved out-of-distribution robustness against unseen corruptions as well as common distribution shifts across multiple, different, unseen sites and unknown image sequences under both low- and high-training data settings. The code can be found at https://github.com/cherise215/MaxStyle.
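
The abstract describes two key ingredients: expanding the style space with noise, and adversarially searching for the worst-case style composition of latent features. Below is a minimal, hypothetical PyTorch sketch of these ideas. The names (`mix_style`, `ToySegNet`, `adversarial_style_step`) are illustrative assumptions, not the authors' API, and the sketch applies style mixing directly to segmentor features rather than through the paper's auxiliary image decoder; the authoritative implementation is in the linked repository.

```python
# Minimal sketch of noise-expanded, adversarial style composition (illustrative
# only; see https://github.com/cherise215/MaxStyle for the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def mix_style(feat, perm, lmda, noise_mu, noise_sig, eps=1e-6):
    """Re-stylize features: strip each sample's instance statistics, then apply
    a noisy convex mix of its own style and another sample's style."""
    mu = feat.mean(dim=(2, 3), keepdim=True)                 # (B, C, 1, 1) style mean
    sig = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()  # (B, C, 1, 1) style std
    normed = (feat - mu) / sig                               # style-normalized content
    mu_mix = lmda * mu + (1.0 - lmda) * mu[perm] + noise_mu      # noise expands the
    sig_mix = lmda * sig + (1.0 - lmda) * sig[perm] + noise_sig  # reachable style space
    return normed * sig_mix + mu_mix

class ToySegNet(nn.Module):
    """Toy stand-in for a segmentation network whose intermediate features can
    be style-augmented (MaxStyle does this inside an auxiliary image decoder)."""
    def __init__(self, n_classes=2, width=16):
        super().__init__()
        self.enc = nn.Conv2d(1, width, 3, padding=1)
        self.head = nn.Conv2d(width, n_classes, 3, padding=1)

    def forward(self, x, style_params=None):
        feat = F.relu(self.enc(x))
        if style_params is not None:
            feat = mix_style(feat, *style_params)
        return self.head(feat)

def adversarial_style_step(net, images, labels, width=16, step_size=0.1):
    """One gradient-ascent step on the style parameters, i.e. a one-step search
    for a harder (higher-loss) style composition."""
    B = images.size(0)
    perm = torch.randperm(B)                                  # whose style to borrow
    raw_lmda = torch.zeros(B, 1, 1, 1, requires_grad=True)    # sigmoid keeps it in (0, 1)
    noise_mu = torch.zeros(B, width, 1, 1, requires_grad=True)
    noise_sig = torch.zeros(B, width, 1, 1, requires_grad=True)
    style = (perm, torch.sigmoid(raw_lmda), noise_mu, noise_sig)
    loss = F.cross_entropy(net(images, style), labels)
    g_l, g_m, g_s = torch.autograd.grad(loss, [raw_lmda, noise_mu, noise_sig])
    with torch.no_grad():  # ascend the loss to harden the style
        return (perm,
                torch.sigmoid(raw_lmda + step_size * g_l.sign()),
                noise_mu + step_size * g_m.sign(),
                noise_sig + step_size * g_s.sign())

# Usage: search for a hard style, then train the network on the augmented batch.
net = ToySegNet()
images, labels = torch.randn(4, 1, 32, 32), torch.randint(0, 2, (4, 32, 32))
hard_style = adversarial_style_step(net, images, labels)
F.cross_entropy(net(images, hard_style), labels).backward()
```

In this toy setup, the style parameters are optimized to maximize the segmentation loss before the network is trained on the resulting hard, style-augmented batch; noise on the mixed statistics pushes the augmented styles beyond the convex hull of styles seen in the batch.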
