Paper Title

Rethinking Super-Resolution as Text-Guided Details Generation

Authors

Chenxi Ma, Bo Yan, Qing Lin, Weimin Tan, Siming Chen

Abstract

Deep neural networks have greatly promoted the performance of single image super-resolution (SISR). Conventional methods still resort to restoring a single high-resolution (HR) solution based only on the input image modality. However, image-level information is insufficient to predict adequate details and photo-realistic visual quality at large upscaling factors (x8, x16). In this paper, we propose a new perspective that regards SISR as a semantic image detail enhancement problem, aiming to generate semantically reasonable HR images that are faithful to the ground truth. To enhance the semantic accuracy and the visual quality of the reconstructed image, we explore multi-modal fusion learning in SISR by proposing a Text-Guided Super-Resolution (TGSR) framework, which can effectively utilize information from both the text and image modalities. Unlike existing methods, the proposed TGSR can generate HR image details that match the text descriptions through a coarse-to-fine process. Extensive experiments and ablation studies demonstrate the effectiveness of TGSR, which exploits the text reference to recover realistic images.
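
As a rough illustration of the multi-modal fusion idea in the abstract, the sketch below shows one way a text-guided SR forward pass could inject a pooled text embedding into low-resolution image features and then upsample coarse-to-fine. This is a minimal sketch, not the authors' TGSR implementation: every module name, dimension, the concatenation-based fusion, and the stacked x2 PixelShuffle stages are assumptions made for illustration.

```python
# Minimal sketch of a text-guided super-resolution forward pass (assumed
# design, not the paper's TGSR architecture).
import torch
import torch.nn as nn

class ToyTextGuidedSR(nn.Module):
    def __init__(self, vocab_size=10000, text_dim=128, feat_dim=64, scale=8):
        super().__init__()
        # Text branch: embed tokens, average-pool into one description vector.
        self.text_embed = nn.Embedding(vocab_size, text_dim)
        self.text_proj = nn.Linear(text_dim, feat_dim)
        # Image branch: shallow feature extractor on the LR input.
        self.img_encode = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(inplace=True))
        # Fusion: concatenate the spatially broadcast text vector with image
        # features, then mix with a conv (one simple fusion choice).
        self.fuse = nn.Conv2d(feat_dim * 2, feat_dim, 3, padding=1)
        # Coarse-to-fine upsampling: repeated x2 PixelShuffle stages
        # (scale is assumed to be a power of two, e.g. 8 or 16).
        stages = []
        for _ in range(scale.bit_length() - 1):
            stages += [nn.Conv2d(feat_dim, feat_dim * 4, 3, padding=1),
                       nn.PixelShuffle(2), nn.ReLU(inplace=True)]
        self.upsample = nn.Sequential(*stages)
        self.to_rgb = nn.Conv2d(feat_dim, 3, 3, padding=1)

    def forward(self, lr_img, text_tokens):
        # lr_img: (B, 3, H, W); text_tokens: (B, T) integer token ids.
        t = self.text_embed(text_tokens).mean(dim=1)  # (B, text_dim)
        t = self.text_proj(t)                         # (B, feat_dim)
        f = self.img_encode(lr_img)                   # (B, feat_dim, H, W)
        t_map = t[:, :, None, None].expand_as(f)      # broadcast over space
        f = self.fuse(torch.cat([f, t_map], dim=1))   # fuse the two modalities
        return self.to_rgb(self.upsample(f))          # (B, 3, H*scale, W*scale)

# Usage: x8 upscaling of a 32x32 LR patch guided by a 12-token description.
model = ToyTextGuidedSR(scale=8)
hr = model(torch.randn(2, 3, 32, 32), torch.randint(0, 10000, (2, 12)))
print(hr.shape)  # torch.Size([2, 3, 256, 256])
```

Here the x8 factor is realized as three stacked x2 stages, a common way to make the reconstruction progressively coarse-to-fine; the paper's actual refinement scheme may differ.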
