Title
Teaching a GAN What Not to Learn
Authors
Abstract
Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution. Variants such as conditional GANs and auxiliary-classifier GANs (ACGANs) project GANs onto supervised and semi-supervised learning frameworks by providing labelled data and using multi-class discriminators. In this paper, we approach the supervised GAN problem from a different perspective, one motivated by the philosophy of the famous Persian poet Rumi, who said, "The art of knowing is knowing what to ignore." In the GAN framework, we not only provide the GAN with positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid - we call this "The Rumi Framework." This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize undesirable samples - we show that this capability accelerates the learning process of the generator. We present a reformulation of the standard GAN (SGAN) and least-squares GAN (LSGAN) within the Rumi setting. The advantage of the reformulation is demonstrated by experiments conducted on the MNIST, Fashion MNIST, CelebA, and CIFAR-10 datasets. Finally, we consider an application of the proposed formulation to the important problem of learning an under-represented class in an unbalanced dataset. The Rumi approach results in substantially lower FID scores than the standard GAN frameworks while possessing better generalization capability.
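To make the Rumi idea concrete, the sketch below shows one plausible way a Rumi-style discriminator objective could be wired up in PyTorch: the discriminator is pushed toward 1 on positive (desired) samples and toward 0 on both negative (undesired) real samples and generated samples. This is a minimal illustrative sketch, not the paper's exact objective; the function names `rumi_sgan_d_loss`/`rumi_sgan_g_loss`, the weights `alpha` and `beta`, and the assumption that `D` ends in a sigmoid are all ours.

```python
import torch
import torch.nn.functional as F

def rumi_sgan_d_loss(D, x_pos, x_neg, x_fake, alpha=0.5, beta=0.5):
    """Illustrative Rumi-style SGAN discriminator loss (a sketch).

    Assumes D maps a batch of samples to sigmoid probabilities in [0, 1].
    alpha and beta are hypothetical weights on the positive and negative
    real-data terms; they are not taken from the paper.
    """
    # Positive (desired) real samples: push D(x) toward 1.
    p_pos = D(x_pos)
    loss_pos = F.binary_cross_entropy(p_pos, torch.ones_like(p_pos))

    # Negative (undesired) real samples: push D(x) toward 0,
    # so the discriminator learns to penalize what the GAN must avoid.
    p_neg = D(x_neg)
    loss_neg = F.binary_cross_entropy(p_neg, torch.zeros_like(p_neg))

    # Generated samples: push D(G(z)) toward 0, as in the standard GAN.
    p_fake = D(x_fake)
    loss_fake = F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake))

    return alpha * loss_pos + beta * loss_neg + loss_fake

def rumi_sgan_g_loss(D, x_fake):
    """Non-saturating generator loss: push D(G(z)) toward 1."""
    p_fake = D(x_fake)
    return F.binary_cross_entropy(p_fake, torch.ones_like(p_fake))
```

Because the negative real samples share the fake label, the discriminator's decision boundary is shaped by what to avoid as well as what to match, which is the mechanism the abstract credits for faster generator learning; an analogous reformulation with squared-error terms would give the LSGAN variant.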