Paper Title

Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment

Authors

Ben Usman, Avneesh Sud, Nick Dufour, Kate Saenko

Abstract

Distribution alignment has many applications in deep learning, including domain adaptation and unsupervised image-to-image translation. Most prior work on unsupervised distribution alignment relies either on minimizing simple non-parametric statistical distances such as maximum mean discrepancy or on adversarial alignment. However, the former fails to capture the structure of complex real-world distributions, while the latter is difficult to train and does not provide any universal convergence guarantees or automatic quantitative validation procedures. In this paper, we propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows. We show that, under certain assumptions, this combination yields a deep neural likelihood-based minimization objective that attains a known lower bound upon convergence. We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
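The core idea of pairing a log-likelihood ratio statistic with a learned map can be illustrated with a toy numerical sketch. Note this is only a loose illustration of the statistic, not the paper's method: it uses 1-D Gaussians in place of normalizing flows and an affine map in place of a learned flow, and all function names are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_loglik(x, mu, sigma):
    # Log-likelihood of samples x under a 1-D Gaussian N(mu, sigma^2).
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

def llr_statistic(a, b):
    # Two-sample log-likelihood ratio: best separate Gaussian fits
    # vs. a single Gaussian fit on the pooled data. Non-negative,
    # and close to zero when the two samples look alike.
    pooled = np.concatenate([a, b])
    ll_shared = gaussian_loglik(pooled, pooled.mean(), pooled.std())
    ll_separate = (gaussian_loglik(a, a.mean(), a.std())
                   + gaussian_loglik(b, b.mean(), b.std()))
    return ll_separate - ll_shared

source = rng.normal(3.0, 2.0, 1000)   # source domain samples
target = rng.normal(0.0, 1.0, 1000)   # target domain samples

# Stand-in for a trained flow: an affine map aligning source to target.
aligned = (source - source.mean()) / source.std() * target.std() + target.mean()

# Minimizing such a statistic over the map's parameters drives it toward
# its known lower bound of zero; alignment shrinks it here.
print(llr_statistic(source, target) > llr_statistic(aligned, target))  # True
```

In the actual paper the per-domain and shared density models are normalizing flows trained jointly with the aligning map, which is what turns this statistic into a differentiable, likelihood-based minimization objective.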
