Paper Title

GPTs at Factify 2022: Prompt Aided Fact-Verification

Paper Authors

Pawan Kumar Sahu, Saksham Aggarwal, Taneesh Gupta, Gyanendra Das

Paper Abstract


One of the most pressing societal issues is the fight against fake news. False claims, difficult as they are to expose, cause a great deal of damage. To tackle this problem, fact verification becomes crucial, and it has therefore been a topic of interest among diverse research communities. Using only the textual form of the data, we propose our solution to the problem and achieve results competitive with other approaches. We present solutions based on two approaches - a PLM (pre-trained language model) based method and a prompt-based method. The PLM-based approach uses traditional supervised learning, where the model is trained to take 'x' as input and output the prediction 'y' as P(y|x). Prompt-based learning, by contrast, reflects the idea of designing the input to fit the model, so that the original objective can be re-framed as a (masked) language modeling problem. We can further stimulate the rich knowledge stored in PLMs to better serve downstream tasks by employing extra prompts while fine-tuning. Our experiments showed that the proposed method performs better than simply fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and 7th place on the competition leaderboard.
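The re-framing the abstract describes can be sketched in a few lines: instead of feeding raw text to a classifier head that outputs P(y|x), the claim-evidence pair is wrapped in a cloze-style template, and the masked language model's filled-in word is mapped back to a task label through a verbalizer. The template wording and the label words below are illustrative assumptions, not the paper's actual prompt or the Factify label set.

```python
# Hypothetical sketch of prompt-based reframing for fact verification.
# The classification task is recast as filling a [MASK] slot, so the
# pre-trained masked LM's own vocabulary does the predicting.

# Cloze template wrapping the input (wording is an assumption).
TEMPLATE = "Claim: {claim} Evidence: {evidence} The claim is [MASK]."

# Verbalizer: maps label words the LM might emit back to task labels.
# These label names are illustrative, not the official Factify classes.
VERBALIZER = {"true": "support", "false": "refute", "unproven": "insufficient"}

def build_prompt(claim: str, evidence: str) -> str:
    """Wrap a raw (claim, evidence) pair in the cloze template."""
    return TEMPLATE.format(claim=claim, evidence=evidence)

def verbalize(predicted_token: str) -> str:
    """Translate the LM's filled-in token into a task label,
    defaulting to 'insufficient' for out-of-verbalizer words."""
    return VERBALIZER.get(predicted_token.lower(), "insufficient")
```

In a real pipeline, `build_prompt` output would be tokenized and passed to a masked LM, and `verbalize` applied to the highest-probability token at the `[MASK]` position.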
