Title


Optimizing transformations for contrastive learning in a differentiable framework

Authors

Ruppli, Camille, Gori, Pietro, Ardon, Roberto, Bloch, Isabelle

Abstract


Current contrastive learning methods use random transformations sampled from a large list of transformations, with fixed hyperparameters, to learn invariance from an unannotated database. Following previous works that introduce a small amount of supervision, we propose a framework to find optimal transformations for contrastive learning using a differentiable transformation network. Our method improves performance in the low-annotated-data regime, in both supervised accuracy and convergence speed. In contrast to previous work, no generative model is needed for transformation optimization. Transformed images keep the information relevant to solving the supervised task, here classification. Experiments were performed on 34000 2D slices of brain Magnetic Resonance Images and 11200 chest X-ray images. On both datasets, with 10% of labeled data, our model achieves better performance than a fully supervised model trained with 100% of the labels.
