Paper Title
Crowdsourced Fact-Checking at Twitter: How Does the Crowd Compare With Experts?
Paper Authors
Paper Abstract
Fact-checking is one of the effective solutions for fighting online misinformation. However, traditional fact-checking is a process that requires scarce expert human resources, and thus does not scale well on social media, where there is a continuous flow of new content to be checked. Methods based on crowdsourcing have been proposed to tackle this challenge, as they can scale at a lower cost, but, while they have been shown to be feasible, they have always been studied in controlled environments. In this work, we study the first large-scale effort of crowdsourced fact-checking deployed in practice, started by Twitter with the Birdwatch program. Our analysis shows that crowdsourcing may be an effective fact-checking strategy in some settings, even comparable to results obtained by human experts, but does not lead to consistent, actionable results in others. We processed 11.9k tweets verified by the Birdwatch program and report empirical evidence of i) differences in how the crowd and experts select content to be fact-checked, ii) how the crowd and the experts retrieve different resources for fact-checking, and iii) the edge the crowd shows in fact-checking scalability and efficiency as compared to expert checkers.