Paper Title

MU-GAN: Facial Attribute Editing based on Multi-attention Mechanism

Authors

Zhang, Ke; Su, Yukun; Guo, Xiwang; Qi, Liang; Zhao, Zhenbing

Abstract

Facial attribute editing has two main objectives: 1) translating an image from a source domain to a target one, and 2) changing only the facial regions related to a target attribute while preserving attribute-excluding details. In this work, we propose a Multi-attention U-Net-based Generative Adversarial Network (MU-GAN). First, we replace the classic convolutional encoder-decoder with a symmetric U-Net-like structure in the generator, and then apply an additive attention mechanism to build attention-based U-Net connections that adaptively transfer encoder representations to complement the decoder with attribute-excluding details and enhance attribute editing ability. Second, a self-attention mechanism is incorporated into the convolutional layers to model long-range and multi-level dependencies across image regions. Experimental results indicate that our method balances attribute editing ability and detail preservation ability, and can decouple the correlation among attributes. It outperforms state-of-the-art methods in terms of attribute manipulation accuracy and image quality.
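The attention-based U-Net connection described above can be illustrated with an additive attention gate: a decoder gating signal and an encoder skip feature are projected, summed, and collapsed into a per-pixel attention map that re-weights the encoder features before they reach the decoder. The sketch below is a minimal NumPy illustration of this idea, not the paper's implementation; all weights (`W_x`, `W_g`, `psi`) are random stand-ins for learned 1x1-convolution parameters, and both feature maps are assumed to share the same spatial size.

```python
import numpy as np

def attention_gate(x_enc, g_dec, W_x, W_g, psi):
    """Additive attention gate over a U-Net skip connection (illustrative).

    x_enc : encoder feature map, shape (C, H, W)
    g_dec : decoder gating signal, shape (C, H, W), assumed already upsampled
    W_x, W_g : (F, C) weights acting as 1x1 convolutions
    psi : (F,) weights collapsing joint features to a single attention map
    """
    # 1x1 convolutions are channel-wise matrix multiplies at each pixel.
    fx = np.einsum('fc,chw->fhw', W_x, x_enc)
    fg = np.einsum('fc,chw->fhw', W_g, g_dec)
    joint = np.maximum(fx + fg, 0.0)             # additive combination + ReLU
    logits = np.einsum('f,fhw->hw', psi, joint)  # collapse channels to one map
    alpha = 1.0 / (1.0 + np.exp(-logits))        # sigmoid attention in (0, 1)
    # Encoder features are re-weighted before being passed to the decoder.
    return alpha[None] * x_enc, alpha

rng = np.random.default_rng(0)
C, F, H, W = 4, 8, 5, 5
x = rng.standard_normal((C, H, W))   # stand-in encoder features
g = rng.standard_normal((C, H, W))   # stand-in decoder gating signal
gated, alpha = attention_gate(x, g,
                              rng.standard_normal((F, C)),
                              rng.standard_normal((F, C)),
                              rng.standard_normal(F))
print(gated.shape, alpha.shape)
```

The gate lets the network pass through encoder detail where the attention map is near 1 and suppress it where the target attribute must change, which is how the skip connections can complement the decoder without fighting the edit.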
