Paper Title


Self-supervised Model Based on Masked Autoencoders Advance CT Scans Classification

Paper Authors

Jiashu Xu, Sergii Stirenko

Paper Abstract


The coronavirus pandemic has been ongoing since 2019, and the trend is still not abating. Therefore, it is particularly important to classify medical CT scans to assist in medical diagnosis. At present, supervised deep learning algorithms have achieved great success in the classification of medical CT scans, but medical image datasets often require professional image annotation, and many research datasets are not publicly available. To solve this problem, this paper draws inspiration from the self-supervised learning algorithm MAE and uses an MAE model pre-trained on ImageNet to perform transfer learning on CT scan datasets. This method improves the generalization performance of the model and avoids the risk of overfitting on small datasets. Through extensive experiments on the COVID-CT dataset and the SARS-CoV-2 dataset, we compare the SSL-based method in this paper with other state-of-the-art supervised-learning-based pretraining methods. Experimental results show that our method improves the generalization performance of the model more effectively and avoids the risk of overfitting on small datasets. The model achieves almost the same accuracy as supervised learning on both test datasets. Finally, ablation experiments fully demonstrate the effectiveness of our method and how it works.
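The MAE pretraining the abstract builds on works by removing a large random fraction (typically 75%) of image patches and training an autoencoder to reconstruct them. The paper itself does not include code, but the masking step at the core of MAE can be sketched in plain NumPy (names and shapes here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """MAE-style per-image random masking.

    patches: array of shape (L, D) -- L patch embeddings of dimension D.
    Returns (kept, mask, ids_restore):
      kept        -- the retained patch embeddings, shape (L_keep, D)
      mask        -- binary vector in original patch order; 1 = masked
      ids_restore -- permutation that maps shuffled order back to original
    """
    rng = np.random.default_rng(seed)
    L, D = patches.shape
    len_keep = int(L * (1 - mask_ratio))

    noise = rng.random(L)                 # one random score per patch
    ids_shuffle = np.argsort(noise)       # ascending: lowest scores are kept
    ids_restore = np.argsort(ids_shuffle) # inverse permutation

    kept = patches[ids_shuffle[:len_keep]]

    mask = np.ones(L)
    mask[:len_keep] = 0                   # 0 = keep, 1 = masked (shuffled order)
    mask = mask[ids_restore]              # restore original patch order
    return kept, mask, ids_restore

# Example: a 224x224 image with 16x16 patches gives 196 patches;
# 75% masking leaves 49 visible patches for the encoder.
x = np.random.default_rng(1).standard_normal((196, 768))
kept, mask, _ = random_masking(x)
print(kept.shape)       # (49, 768)
print(int(mask.sum()))  # 147 patches masked
```

Because only the visible ~25% of patches pass through the heavy encoder during pretraining, MAE is cheap enough to pre-train at scale; for the transfer-learning setting described above, the decoder is then discarded and a classification head is fine-tuned on the encoder.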
