Paper Title

Textual Visual Semantic Dataset for Text Spotting

Paper Authors

Ahmed Sabir, Francesc Moreno-Noguer, Lluís Padró

Paper Abstract

Text Spotting in the wild consists of detecting and recognizing text appearing in images (e.g., signboards, traffic signals, or brands on clothing or objects). This is a challenging problem due to the complexity of the contexts where text appears (uneven backgrounds, shading, occlusions, perspective distortions, etc.). Only a few approaches try to exploit the relation between text and its surrounding environment to better recognize text in the scene. In this paper, we propose a visual context dataset for Text Spotting in the wild, in which the publicly available dataset COCO-text [Veit et al. 2016] has been extended with information about the scene (such as objects and places appearing in the image), to enable researchers to include semantic relations between text and scene in their Text Spotting systems, and to offer a common framework for such approaches. For each text in an image, we extract three kinds of context information: objects in the scene, an image location label, and a textual image description (caption). We use state-of-the-art, off-the-shelf tools to extract this additional information. Since this information has textual form, text-similarity or semantic-relation methods can be incorporated into Text Spotting systems, either as a post-processing step or in an end-to-end training strategy. Our data is publicly available at https://git.io/JeZTb.
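To make the post-processing idea concrete, below is a minimal sketch of re-ranking OCR hypotheses with the three kinds of textual context the abstract describes. The record schema (field names such as "candidates", "objects", "place", "caption") and the token-overlap relatedness score are illustrative assumptions for this sketch, not the released dataset's actual format or the authors' method; a real system would substitute a proper semantic-similarity model (e.g., word embeddings).

```python
# A minimal sketch, assuming a hypothetical record schema; the field names
# and the toy relatedness score are illustrative, not the dataset's format.

from collections import Counter


def context_tokens(record):
    """Gather all context words: object labels, place label, and caption."""
    tokens = []
    for obj in record["objects"]:
        tokens += obj.lower().split()
    tokens += record["place"].lower().split()
    tokens += record["caption"].lower().split()
    return Counter(tokens)


def relatedness(word, context):
    """Toy relatedness: how often the word appears in the context.
    A real system would use word embeddings or a semantic-similarity model."""
    return context[word.lower()]


def rerank(record, alpha=0.5):
    """Blend recognizer confidence with context relatedness, then re-rank."""
    context = context_tokens(record)
    scored = sorted(
        ((alpha * conf + (1 - alpha) * relatedness(word, context), word)
         for word, conf in record["candidates"]),
        reverse=True,
    )
    return [word for _, word in scored]


if __name__ == "__main__":
    record = {
        # OCR hypotheses for one text region, with recognizer confidences
        "candidates": [("shop", 0.61), ("stop", 0.58)],
        "objects": ["stop sign", "car"],   # objects detected in the scene
        "place": "street",                 # image location label
        "caption": "a red stop sign on a street corner",
    }
    print(rerank(record))  # -> ['stop', 'shop']: context flips the ranking
```

In this toy example the recognizer slightly prefers "shop", but because "stop" occurs in both the detected objects and the caption, the context score reverses the ranking, which is the kind of correction the dataset is designed to enable.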
