Paper Title
An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning
Paper Authors
Paper Abstract
Semi-supervised learning (SSL) has shown great promise in leveraging unlabeled data to improve model performance. While standard SSL assumes a uniform data distribution, we consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data. Although there are existing endeavors to tackle this challenge, their performance degrades under severe imbalance because they cannot reduce the class imbalance sufficiently and effectively. In this paper, we study a simple yet overlooked baseline -- SimiS -- which tackles data imbalance by simply supplementing the labeled data with pseudo-labeled samples, according to each class's gap in the class distribution to the most frequent class. Such a simple baseline turns out to be highly effective in reducing class imbalance. It outperforms existing methods by a significant margin, e.g., 12.8%, 13.6%, and 16.7% over the previous SOTA on CIFAR100-LT, FOOD101-LT, and ImageNet127, respectively. The reduced imbalance also gives SimiS faster convergence and better pseudo-label accuracy. The simplicity of our method makes it easy to combine with other re-balancing techniques to improve performance further. Moreover, our method shows great robustness to a wide range of data distributions, which holds enormous potential in practice. Code will be publicly available.
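
To make the supplementation rule concrete, below is a minimal sketch in plain Python of the idea as the abstract describes it: for each class, confident pseudo-labeled samples are added to the labeled set until its count matches that of the most frequent labeled class. This is an illustration, not the authors' implementation; the function name, the per-sample confidence scores, and the 0.95 confidence threshold are all assumptions.

def supplement_labeled_set(labeled_counts, pseudo_labels, confidences, threshold=0.95):
    # Hypothetical sketch of the class-rebalancing idea described above.
    # labeled_counts[c]: number of labeled samples of class c
    # pseudo_labels[i], confidences[i]: predicted class and confidence for unlabeled sample i
    target = max(labeled_counts)                      # size of the most frequent class
    deficits = [target - n for n in labeled_counts]   # per-class shortfall vs. that class
    selected = []                                     # unlabeled indices to add to the labeled set
    for c in range(len(labeled_counts)):
        # confident unlabeled samples pseudo-labeled as class c
        candidates = [i for i, (p, s) in enumerate(zip(pseudo_labels, confidences))
                      if p == c and s >= threshold]
        # take the most confident ones first, up to the class deficit
        candidates.sort(key=lambda i: confidences[i], reverse=True)
        selected.extend(candidates[:deficits[c]])
    return selected

# Toy usage: classes 0/1/2 have 50/20/5 labeled samples, so classes 1 and 2
# are supplemented with up to 30 and 45 confident pseudo-labeled samples.
indices = supplement_labeled_set(
    labeled_counts=[50, 20, 5],
    pseudo_labels=[1, 2, 2, 1, 0],
    confidences=[0.99, 0.97, 0.90, 0.80, 0.99],
)

Because the most frequent class has a deficit of zero, it receives no pseudo-labels, so the supplementation acts only on the under-represented classes, which is what drives the reduction in class imbalance claimed in the abstract.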