Paper Title

Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples

Paper Authors

Samarth Sinha, Zhengli Zhao, Anirudh Goyal, Colin Raffel, Augustus Odena

Abstract

We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm that materially improves results with no increase in computational cost: When updating the generator parameters, we simply zero out the gradient contributions from the elements of the batch that the critic scores as "least realistic". Through experiments on many different GAN variants, we show that this "top-k update" procedure is a generally applicable improvement. In order to understand the nature of the improvement, we conduct extensive analysis on a simple mixture-of-Gaussians dataset and discover several interesting phenomena. Among these is that, when gradient updates are computed using the worst-scoring batch elements, samples can actually be pushed further away from their nearest mode. We also apply our method to recent GAN variants and improve state-of-the-art FID for conditional generation from 9.21 to 8.57 on CIFAR-10.
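The core idea is easy to sketch: before the generator update, rank the batch by critic score and mask out the lowest-scoring samples so they contribute zero gradient. The following is a minimal NumPy illustration of that masking step, not the authors' actual implementation; the function names (`top_k_mask`, `top_k_generator_loss`), the score convention (higher = more realistic), and the loss form are assumptions made for this example.

```python
import numpy as np

def top_k_mask(critic_scores, k):
    """Return a 0/1 mask keeping only the k highest-scoring
    ("most realistic") batch elements; the rest contribute
    zero gradient to the generator update."""
    top_idx = np.argsort(critic_scores)[-k:]  # indices of the k best scores
    mask = np.zeros_like(critic_scores)
    mask[top_idx] = 1.0
    return mask

def top_k_generator_loss(critic_scores, k):
    """Hypothetical non-saturating generator loss restricted to the
    top-k samples. Dividing by k (not the batch size) keeps the loss
    scale comparable as k shrinks."""
    mask = top_k_mask(critic_scores, k)
    return -(mask * critic_scores).sum() / k

# Toy critic outputs for a batch of 4 generated samples.
scores = np.array([0.9, 0.1, 0.7, 0.3])
mask = top_k_mask(scores, k=2)   # keeps samples 0 and 2 -> [1, 0, 1, 0]
loss = top_k_generator_loss(scores, k=2)
```

In an actual training loop, the mask would multiply the per-sample generator losses before backpropagation, so the "thrown-away" samples simply produce zero gradient, matching the paper's description of zeroing out gradient contributions from the least realistic batch elements.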
