Paper Title


Text Detection & Recognition in the Wild for Robot Localization

Paper Authors

Raisi, Zobeir, Zelek, John

Paper Abstract


Signage is everywhere, and a robot should be able to take advantage of signs to help it localize (including Visual Place Recognition (VPR)) and map. Robust text detection & recognition in the wild is challenging due to factors such as pose, irregular text, illumination, and occlusion. We propose an end-to-end scene text spotting model that simultaneously outputs the text string and bounding boxes, making it better suited for VPR. Our central contribution is utilizing an end-to-end scene text spotting framework to adequately capture irregular and occluded text regions in a variety of challenging places. To evaluate our proposed architecture's performance for VPR, we conducted several experiments on the challenging Self-Collected Text Place (SCTP) benchmark dataset. Initial experimental results show that the proposed method outperforms SOTA methods in terms of precision and recall when tested on this benchmark.
