Paper Title

Fair and Argumentative Language Modeling for Computational Argumentation

Authors

Carolin Holtermann, Anne Lauscher, Simone Paolo Ponzetto

Abstract

Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy. In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models. To this end, we introduce ABBA, a novel resource for bias measurement specifically tailored to argumentation. We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the intrinsic bias found in transformer-based language models using a lightweight adapter-based approach that is more sustainable and parameter-efficient than full fine-tuning. Finally, we analyze the potential impact of language model debiasing on the performance in argument quality prediction, a downstream task of computational argumentation. Our results show that we are able to successfully and sustainably remove bias in general and argumentative language models while preserving (and sometimes improving) model performance in downstream tasks. We make all experimental code and data available at https://github.com/umanlp/FairArgumentativeLM.
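
The "lightweight adapter-based approach" mentioned in the abstract refers to the general technique of inserting small trainable bottleneck modules into a frozen pre-trained transformer, so that fine-tuning or debiasing updates only a small fraction of the model's parameters. The sketch below is a minimal PyTorch illustration of that idea, not the paper's exact implementation: the class name, the ReLU non-linearity, the reduction factor of 16, and the name-based freezing helper are assumptions made for this example.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter (Houlsby/Pfeiffer style): down-project,
    apply a non-linearity, up-project, and add the result back to the input.
    Illustrative sketch only; hyperparameters are assumptions."""

    def __init__(self, hidden_size: int, reduction_factor: int = 16):
        super().__init__()
        bottleneck = hidden_size // reduction_factor
        self.down = nn.Linear(hidden_size, bottleneck)  # down-projection
        self.up = nn.Linear(bottleneck, hidden_size)    # up-projection
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps an untrained adapter close to an
        # identity map, so the frozen backbone's representations survive.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def freeze_backbone_except_adapters(model: nn.Module) -> None:
    """Hypothetical helper: train only parameters whose name marks them as
    adapter weights; everything else in the pre-trained model stays frozen."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name


# Tiny usage example; 768 is the hidden size of BERT-base.
adapter = BottleneckAdapter(hidden_size=768)
x = torch.randn(2, 16, 768)  # (batch, sequence, hidden)
assert adapter(x).shape == x.shape
```

Because only the down- and up-projection matrices are updated (roughly 2 × hidden_size × bottleneck parameters per adapter), this style of fine-tuning and debiasing touches a small fraction of the model's weights, which is what makes it more parameter-efficient and sustainable than full fine-tuning.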
