Paper Title

Diffusion-based Image Translation using Disentangled Style and Content Representation

Paper Authors

Gihyun Kwon, Jong Chul Ye

Paper Abstract

Diffusion-based image translation guided by semantic texts or a single target image has enabled flexible style transfer that is not limited to specific domains. Unfortunately, due to the stochastic nature of diffusion models, it is often difficult to maintain the original content of the image during reverse diffusion. To address this, we present a novel diffusion-based unsupervised image translation method using disentangled style and content representation. Specifically, inspired by the splicing Vision Transformer, we extract intermediate keys of the multi-head self-attention layers from a ViT model and use them as a content preservation loss. Image-guided style transfer is then performed by matching the [CLS] classification token between the denoised samples and the target image, whereas an additional CLIP loss is used for text-driven style transfer. To further accelerate the semantic change during reverse diffusion, we also propose a novel semantic divergence loss and resampling strategy. Our experimental results show that the proposed method outperforms state-of-the-art baseline models in both text-guided and image-guided translation tasks.
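
To make the loss design described in the abstract concrete, below is a minimal sketch (not the authors' implementation) of the two ViT-based losses: a content preservation loss that compares intermediate self-attention keys of the source and denoised images, and an image-guided style loss that matches their [CLS] tokens against the target image. It assumes a timm-style ViT in which each block exposes its attention projection as `blocks[i].attn.qkv`, and a recent timm version whose `forward_features` returns the full token sequence (the paper itself builds on a DINO-trained ViT following Splicing ViT); `extract_keys_and_cls`, `content_and_style_losses`, and `layer_idx` are illustrative names, not from the paper.

```python
# Sketch of a ViT-key content loss and a [CLS]-token style loss,
# assuming a timm-style ViT (illustrative, not the authors' code).
import torch
import torch.nn.functional as F
import timm


def extract_keys_and_cls(model, x, layer_idx=11):
    """Run a ViT forward pass, capturing the self-attention keys of one
    block plus the final [CLS] token."""
    captured = {}

    def hook(_module, _inp, out):
        # qkv output: (B, N, 3 * C); the middle chunk along the last
        # dimension holds the keys for all heads concatenated.
        B, N, C3 = out.shape
        qkv = out.reshape(B, N, 3, C3 // 3)
        captured["keys"] = qkv[:, :, 1, :]          # (B, N, C)

    handle = model.blocks[layer_idx].attn.qkv.register_forward_hook(hook)
    # Assumes timm >= 0.6, where forward_features returns all tokens (B, N, C).
    tokens = model.forward_features(x)
    handle.remove()
    cls_token = tokens[:, 0]                        # final [CLS] token
    return captured["keys"], cls_token


def content_and_style_losses(model, x_src, x_gen, x_style):
    """Content loss: distance between attention keys of source vs. denoised
    image. Style loss: distance between [CLS] tokens of denoised vs. target."""
    with torch.no_grad():
        k_src, _ = extract_keys_and_cls(model, x_src)
        _, cls_style = extract_keys_and_cls(model, x_style)
    k_gen, cls_gen = extract_keys_and_cls(model, x_gen)
    loss_content = F.mse_loss(k_gen, k_src)
    loss_style = F.mse_loss(cls_gen, cls_style)
    return loss_content, loss_style


if __name__ == "__main__":
    vit = timm.create_model("vit_base_patch16_224", pretrained=False).eval()
    x_src, x_gen, x_style = [torch.randn(1, 3, 224, 224) for _ in range(3)]
    l_c, l_s = content_and_style_losses(vit, x_src, x_gen, x_style)
    print(float(l_c), float(l_s))
```

In the paper these losses guide each reverse diffusion step; the snippet only evaluates them on random tensors to show the shapes involved and that gradients flow through the denoised image `x_gen`.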
