Paper Title
Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement
Paper Authors
Paper Abstract
We present a method to improve the visual realism of low-quality, synthetic images, e.g. OpenGL renderings. Training an unpaired synthetic-to-real translation network in image space is severely under-constrained and produces visible artifacts. Instead, we propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image. Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets, and further increases the realism of the textures and shading with an improved CycleGAN network. Extensive evaluations on the SUNCG indoor scene dataset demonstrate that our approach yields more realistic images compared to other state-of-the-art approaches. Furthermore, networks trained on our generated "real" images predict more accurate depth and normals than domain adaptation approaches, suggesting that improving the visual realism of the images can be more effective than imposing task-specific losses.
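To make the data flow of the two-stage pipeline concrete, here is a minimal PyTorch sketch. It assumes the synthetic renderer exports ground-truth albedo and shading buffers (as OpenGL-rendered SUNCG scenes can), and the module definitions (`ShadingNet`, `RefineNet`, `Discriminator`) and loss weights are illustrative placeholders, not the authors' released code; a full CycleGAN stage would additionally include a reverse generator and cycle-consistency losses.

```python
# Minimal sketch of the two-stage CG2Real pipeline described in the abstract.
# Assumptions (not from the paper's code): network architectures, loss
# weights, and the toy tensor shapes below are all placeholders.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """3x3 conv -> ReLU; a tiny building block to keep the sketch self-contained."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class ShadingNet(nn.Module):
    """Stage 1: map cheap OpenGL shading to physically-based shading (supervised)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, s_gl):
        return self.net(s_gl)

class RefineNet(nn.Module):
    """Stage 2 generator: CycleGAN-style refinement of the recomposed image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, img):
        return img + self.net(img)  # residual refinement of texture and shading

class Discriminator(nn.Module):
    """PatchGAN-style critic distinguishing real photos from translated images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, img):
        return self.net(img)

shading_net, refine_net, disc = ShadingNet(), RefineNet(), Discriminator()
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

# Toy batch: OpenGL shading, paired PBR shading target, albedo, unpaired real photos.
s_gl   = torch.rand(4, 3, 64, 64)
s_pbr  = torch.rand(4, 3, 64, 64)
albedo = torch.rand(4, 3, 64, 64)

# Stage 1: supervised shading prediction against the physically-based target.
s_pred = shading_net(s_gl)
loss_shading = l1(s_pred, s_pbr)

# Recompose the image from the disentangled layers: I = albedo * shading.
fake = refine_net(albedo * s_pred)

# Stage 2: adversarial loss pushes the recomposed image toward the real domain.
pred_fake = disc(fake)
loss_gan = bce(pred_fake, torch.ones_like(pred_fake))

(loss_shading + 0.5 * loss_gan).backward()  # 0.5 is an assumed weighting
print(f"shading L1: {loss_shading.item():.3f}  GAN: {loss_gan.item():.3f}")
```

The key design point the sketch reflects is that the GAN only has to refine a recomposed image whose shading is already physically plausible, which constrains the unpaired translation far more than operating on raw RGB.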