Title

Training Generative Adversarial Networks via stochastic Nash games

Authors

Barbara Franci, Sergio Grammatico

Abstract

Generative adversarial networks (GANs) are a class of generative models with two antagonistic neural networks: a generator and a discriminator. These two neural networks compete against each other through an adversarial process that can be modeled as a stochastic Nash equilibrium problem. Since the associated training process is challenging, it is essential to design reliable algorithms to compute an equilibrium. In this paper, we propose a stochastic relaxed forward-backward (SRFB) algorithm for GANs, and we show convergence to an exact solution as an increasing number of data samples becomes available. We also show convergence of an averaged variant of the SRFB algorithm to a neighborhood of the solution when only a few samples are available. In both cases, convergence is guaranteed when the pseudogradient mapping of the game is monotone. This assumption is among the weakest known in the literature. Moreover, we apply our algorithm to the image generation problem.
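To make the relaxation idea concrete, here is a minimal sketch of a stochastic relaxed forward-backward iteration on a toy monotone game. It uses a bilinear game (whose pseudogradient is monotone but not strongly monotone) rather than an actual GAN, and the step size, relaxation parameter, and noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monotone pseudogradient of the bilinear game min_x max_y x.T @ y:
# F(x, y) = (y, -x). The additive noise mimics estimating F from a
# few data samples, as in the stochastic setting of the abstract.
def pseudogradient(z, noise=0.0):
    x, y = z[:2], z[2:]
    g = np.concatenate([y, -x])
    return g + noise * rng.standard_normal(4)

z = np.array([1.0, -1.0, 0.5, 2.0])  # current iterate (x, y)
z_bar = z.copy()                      # relaxed (averaged) iterate
delta, lam = 0.382, 0.3               # relaxation and step size (assumed values)

for _ in range(2000):
    z_bar = (1 - delta) * z_bar + delta * z          # relaxation step
    z = z_bar - lam * pseudogradient(z, noise=0.01)  # forward step; a projection
                                                     # would follow here if the
                                                     # strategy sets were constrained

print(np.linalg.norm(z))  # small: the iterate settles near the equilibrium (0, 0)
```

Plain simultaneous gradient descent-ascent spirals away from the equilibrium on this bilinear game; the relaxation step is what damps the rotation, which is the intuition behind requiring only monotonicity of the pseudogradient.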
