Paper Title
Self-Supervised Class-Cognizant Few-Shot Classification
Paper Authors
Paper Abstract
Unsupervised learning is argued to be the dark matter of human intelligence. Building in this direction, this paper focuses on unsupervised learning from an abundance of unlabeled data followed by few-shot fine-tuning on a downstream classification task. To this end, we extend a recent study on adopting contrastive learning for self-supervised pre-training by incorporating class-level cognizance through iterative clustering and re-ranking, and by expanding the contrastive optimization loss to account for it. Our experiments in both standard and cross-domain scenarios demonstrate that, to the best of our knowledge, we set a new state-of-the-art (SoTA) in the (5-way, 1 and 5-shot) settings of the standard mini-ImageNet benchmark as well as the (5-way, 5 and 20-shot) settings of the cross-domain CDFSL benchmark. Our code and experimentation can be found in our GitHub repository: https://github.com/ojss/c3lr.
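For illustration only, the sketch below shows one way class-level cognizance could be folded into a contrastive objective: embeddings of two augmented views are clustered into pseudo-classes (plain k-means here, omitting the paper's re-ranking step), and same-cluster pairs are added as positives on top of the usual instance-level InfoNCE term. This is a minimal sketch under those assumptions, not the authors' C3LR implementation; names such as `class_cognizant_contrastive_loss`, `lam`, and `n_clusters` are illustrative.

```python
# Hedged sketch: instance-level InfoNCE plus a cluster-based, class-cognizant term.
# Not the C3LR reference code; re-ranking before clustering is intentionally omitted.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def class_cognizant_contrastive_loss(z1, z2, n_clusters=5, temperature=0.5, lam=0.5):
    """z1, z2: embeddings of two augmented views of the same batch, shape (B, D)."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2B, D), unit-norm
    sim = torch.mm(z, z.t()) / temperature                   # pairwise similarities
    sim.fill_diagonal_(-1e9)                                  # exclude self-pairs

    # Instance-level InfoNCE: the positive for view i is the other view of the same image.
    pos_idx = (torch.arange(2 * B) + B) % (2 * B)
    instance_loss = F.cross_entropy(sim, pos_idx)

    # Class-level term: k-means pseudo-labels act as surrogate classes, and every
    # same-cluster pair is treated as a positive (supervised-contrastive style).
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        z.detach().cpu().numpy())
    labels = torch.as_tensor(labels)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos_mask.fill_diagonal_(0)                                # no self-positives
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    cluster_loss = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)

    return instance_loss + lam * cluster_loss.mean()


if __name__ == "__main__":
    z1, z2 = torch.randn(32, 64), torch.randn(32, 64)        # stand-in encoder outputs
    print(class_cognizant_contrastive_loss(z1, z2).item())
```

The weighting `lam` between the instance-level and cluster-level terms is an assumed hyperparameter of this sketch; the pseudo-labels are recomputed per batch on detached embeddings so gradients flow only through the contrastive terms.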