Paper Title
SketchyCOCO: Image Generation from Freehand Scene Sketches
Paper Authors
Paper Abstract
We introduce the first method for automatic image generation from scene-level freehand sketches. Our model allows for controllable image generation by specifying the synthesis goal via freehand sketches. The key contribution is an attribute vector-bridged Generative Adversarial Network called EdgeGAN, which supports high-visual-quality object-level image content generation without using freehand sketches as training data. We have built a large-scale composite dataset called SketchyCOCO to support and evaluate the solution. We validate our approach on the tasks of both object-level and scene-level image generation on SketchyCOCO. Through quantitative and qualitative results, human evaluation, and ablation studies, we demonstrate the method's capacity to generate realistic, complex scene-level images from various freehand sketches.