Paper Title
Towards a Multi-modal, Multi-task Learning based Pre-training Framework for Document Representation Learning
Paper Authors
Paper Abstract
Recent approaches in the literature have exploited the multi-modal information in documents (text, layout, image) to serve specific downstream document tasks. However, they are limited by (i) their inability to learn cross-modal representations across the text, layout, and image dimensions of documents, and (ii) their inability to process multi-page documents. Pre-training techniques in the Natural Language Processing (NLP) domain have been shown to learn generic textual representations from large unlabelled datasets that are applicable to various downstream NLP tasks. In this paper, we propose a multi-task learning based framework that utilizes a combination of self-supervised and supervised pre-training tasks to learn a generic document representation applicable to various downstream document tasks. Specifically, we introduce Document Topic Modelling and Document Shuffle Prediction as novel pre-training tasks to learn rich image representations along with the text and layout representations of documents. We utilize the Longformer network architecture as the backbone to encode the multi-modal information from multi-page documents in an end-to-end fashion. We showcase the applicability of our pre-training framework on a variety of real-world document tasks such as document classification, document information extraction, and document retrieval. We evaluate our framework on standard document datasets and conduct exhaustive experiments comparing its performance against various ablations of our framework and state-of-the-art baselines.
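The abstract names a Longformer backbone over fused text, layout, and image signals, with multiple pre-training heads (including Document Topic Modelling and Document Shuffle Prediction). The sketch below is a minimal, hypothetical illustration of such a setup; the fusion scheme, feature dimensions, and head names are assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of a multi-modal, multi-task pre-training encoder.
# Only the Longformer backbone is stated in the paper; the layout/image
# fusion and the specific heads below are illustrative assumptions.
import torch.nn as nn
from transformers import LongformerModel

class MultiModalDocEncoder(nn.Module):
    def __init__(self, num_topics=50, image_feat_dim=2048, hidden_size=768):
        super().__init__()
        # Longformer's long attention window lets a multi-page document be
        # encoded as a single token sequence in an end-to-end fashion.
        self.backbone = LongformerModel.from_pretrained("allenai/longformer-base-4096")
        # Assumed fusion: project per-token 2D layout boxes (x0, y0, x1, y1)
        # and per-token image-region features into the text embedding space.
        self.layout_proj = nn.Linear(4, hidden_size)
        self.image_proj = nn.Linear(image_feat_dim, hidden_size)
        # Pre-training heads (names are illustrative).
        self.topic_head = nn.Linear(hidden_size, num_topics)   # Document Topic Modelling
        self.shuffle_head = nn.Linear(hidden_size, 2)           # Document Shuffle Prediction
        self.mlm_head = nn.Linear(hidden_size, self.backbone.config.vocab_size)

    def forward(self, input_ids, attention_mask, layout_boxes, image_feats):
        # Sum text, layout, and image embeddings token-wise (assumed fusion).
        text_emb = self.backbone.embeddings.word_embeddings(input_ids)
        fused = text_emb + self.layout_proj(layout_boxes) + self.image_proj(image_feats)
        out = self.backbone(inputs_embeds=fused, attention_mask=attention_mask)
        doc_repr = out.last_hidden_state[:, 0]  # first-token pooled document vector
        return {
            "topics": self.topic_head(doc_repr),            # supervised/weakly supervised task
            "shuffled": self.shuffle_head(doc_repr),         # self-supervised page-order task
            "mlm": self.mlm_head(out.last_hidden_state),     # standard masked-language task
        }
```

In a multi-task training loop, the per-head losses would typically be combined as a weighted sum before back-propagation; the weighting scheme is not specified in the abstract.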