Paper Title
TaCube: Pre-computing Data Cubes for Answering Numerical-Reasoning Questions over Tabular Data
Paper Authors
Paper Abstract
Existing auto-regressive pre-trained language models (PLMs) such as T5 and BART have been successfully applied to table question answering by UNIFIEDSKG and TAPEX, respectively, and have demonstrated state-of-the-art results on multiple benchmarks. However, auto-regressive PLMs are challenged by recently emerging numerical reasoning datasets, such as TAT-QA, because of error-prone implicit calculation. In this paper, we present TaCube, which pre-computes aggregation/arithmetic results over the table in advance, so that they are readily available for PLMs when answering numerical reasoning questions. TaCube systematically and comprehensively covers a collection of computational operations over table segments. Simply concatenating TaCube's pre-computed results to the input sequence of PLMs proves effective: TaCube raises the F1 score on TAT-QA from 49.6% to 66.2% and achieves new state-of-the-art results on WikiTQ (59.6% denotation accuracy). Its improvements on numerical reasoning cases are even more notable: on TAT-QA, TaCube improves the exact match accuracy of BART-large by 39.6% on sum, 52.5% on average, 36.6% on subtraction, and 22.2% on division. We believe that TaCube is a general and portable pre-computation solution that can potentially be integrated into various numerical reasoning frameworks.
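To make the pre-compute-then-concatenate idea from the abstract concrete, below is a minimal Python sketch: aggregates and pairwise arithmetic results are computed over the numeric columns of a table ("table segments") and appended to the linearized PLM input. The function names (build_tacube, linearize), the chosen operation set, and the [TABLE]/[ROW]/[CUBE] serialization markers are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of pre-computing a "data cube" of aggregation/arithmetic
# results and concatenating it to the PLM input sequence. Names and markers
# are assumptions for illustration; the paper's real design may differ.
from itertools import combinations
from typing import Dict, List


def build_tacube(headers: List[str], rows: List[List[str]]) -> Dict[str, float]:
    """Pre-compute per-column aggregates and pairwise arithmetic results."""
    cube: Dict[str, float] = {}
    for j, name in enumerate(headers):
        values = []
        for row in rows:
            try:
                values.append(float(str(row[j]).replace(",", "")))
            except ValueError:
                pass  # skip non-numeric cells
        if not values:
            continue
        cube[f"sum({name})"] = sum(values)
        cube[f"avg({name})"] = sum(values) / len(values)
        # pairwise subtraction/division over cells of the same column
        for a, b in combinations(values, 2):
            cube[f"diff({name},{a},{b})"] = a - b
            if b != 0:
                cube[f"div({name},{a},{b})"] = a / b
    return cube


def linearize(question: str, headers: List[str], rows: List[List[str]],
              cube: Dict[str, float]) -> str:
    """Concatenate question, flattened table, and pre-computed cube entries."""
    table_str = " | ".join(headers) + "".join(
        " [ROW] " + " | ".join(map(str, r)) for r in rows
    )
    cube_str = " ".join(f"{k}={v:.2f}" for k, v in cube.items())
    return f"{question} [TABLE] {table_str} [CUBE] {cube_str}"


headers = ["Year", "Revenue"]
rows = [["2019", "120"], ["2020", "150"]]
cube = build_tacube(headers, rows)
print(linearize("What is the average revenue?", headers, rows, cube))
```

With the cube entries in the input, the PLM can copy a pre-computed value (e.g. avg(Revenue)=135.00) instead of performing the arithmetic implicitly during generation, which is the failure mode the abstract attributes to auto-regressive PLMs.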