Title
Pay Attention to What You Read: Non-recurrent Handwritten Text-Line Recognition
Authors
Abstract
The advent of recurrent neural networks for handwriting recognition marked an important milestone, reaching impressive recognition accuracies despite the great variability observed across different writing styles. Sequential architectures are a perfect fit for modeling text lines, not only because of the inherent temporal aspect of text, but also because they learn probability distributions over sequences of characters and words. However, such recurrent paradigms come at a cost at training time, since their sequential pipelines prevent parallelization. In this work, we introduce a non-recurrent approach to recognizing handwritten text through the use of transformer models. We propose a novel method that bypasses any recurrence. By using multi-head self-attention layers at both the visual and textual stages, we are able to tackle character recognition as well as learn the language-related dependencies of the character sequences to be decoded. Our model is not constrained to any predefined vocabulary and can recognize out-of-vocabulary words, i.e., words that do not appear in the training vocabulary. We significantly advance over prior art and demonstrate that satisfactory recognition accuracies are obtained even in few-shot learning scenarios.
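
The abstract describes a visual stage (self-attention over image features) feeding a textual transformer decoder that predicts characters. As a rough illustration only, a minimal PyTorch sketch of such a non-recurrent pipeline might look as follows; the conv backbone, layer sizes, and the TransformerHTR class name are assumptions for illustration, not the authors' implementation, and positional encodings are omitted for brevity:

import torch
import torch.nn as nn

class TransformerHTR(nn.Module):
    """Illustrative sketch only, not the authors' code: conv features ->
    transformer encoder (visual stage) -> transformer decoder (textual
    stage), with no recurrence anywhere in the pipeline."""
    def __init__(self, num_chars, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        # Visual stage: a small conv stack turns the line image into a
        # 1-D feature sequence (height collapsed to 1).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),
        )
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers)
        # Textual stage: character-level decoder; training is parallel
        # over the whole target sequence, unlike an RNN.
        self.embed = nn.Embedding(num_chars, d_model)
        dec = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers)
        self.out = nn.Linear(d_model, num_chars)

    def forward(self, images, tgt_tokens):
        # images: (B, 1, H, W); tgt_tokens: (B, T) character ids.
        feats = self.backbone(images)              # (B, d_model, 1, W')
        feats = feats.squeeze(2).permute(0, 2, 1)  # (B, W', d_model)
        memory = self.encoder(feats)               # visual self-attention
        tgt = self.embed(tgt_tokens)               # (B, T, d_model)
        # Causal mask: each position attends only to earlier characters.
        T = tgt_tokens.size(1)
        mask = torch.triu(
            torch.full((T, T), float("-inf"), device=tgt.device),
            diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=mask)
        return self.out(hidden)                    # (B, T, num_chars)

Because decoding happens at the character level rather than over a closed word list, such a model is not tied to a predefined vocabulary, which is consistent with the out-of-vocabulary claim in the abstract.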