Paper Title
Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
Paper Authors
Paper Abstract
Despite impressive performance on many benchmark datasets, AI models can still make mistakes, especially on out-of-distribution examples. How such imperfect models can be used effectively in collaboration with humans remains an open question. Prior work has focused on AI assistance that helps people make individual high-stakes decisions, which does not scale to a large number of relatively low-stakes decisions, e.g., moderating social media comments. Instead, we propose conditional delegation as an alternative paradigm for human-AI collaboration, in which humans create rules to indicate trustworthy regions of a model. Using content moderation as a testbed, we develop novel interfaces to assist humans in creating conditional delegation rules and conduct a randomized experiment with two datasets to simulate in-distribution and out-of-distribution scenarios. Our study demonstrates the promise of conditional delegation in improving model performance and provides design insights for this novel paradigm, including the effect of AI explanations.
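The abstract describes conditional delegation only at a high level, and the paper does not prescribe an implementation. As a minimal sketch, under the assumption that human-authored rules are keyword predicates over comment text, the routing logic might look like the following; all names here (`contains_any`, `conditional_delegation`, the stub model) are hypothetical illustrations, not the authors' actual system:

```python
from typing import Callable, List

# Hypothetical types for illustration: a rule is a predicate over a comment,
# and the model returns a toxicity score in [0, 1].
Rule = Callable[[str], bool]

def contains_any(keywords: List[str]) -> Rule:
    """Build a simple keyword rule delimiting a region to delegate to the model."""
    lowered = [k.lower() for k in keywords]
    return lambda comment: any(k in comment.lower() for k in lowered)

def conditional_delegation(comment: str,
                           delegation_rules: List[Rule],
                           model_predict: Callable[[str], float],
                           threshold: float = 0.5) -> str:
    """Delegate to the model only inside human-specified trustworthy regions;
    route everything else to a human moderator."""
    if any(rule(comment) for rule in delegation_rules):
        score = model_predict(comment)   # the model handles this region
        return "remove" if score >= threshold else "keep"
    return "human_review"                # outside trusted regions

# Example usage with one human-authored rule and a stub model.
rules = [contains_any(["idiot", "stupid"])]
stub_model = lambda c: 0.9 if "idiot" in c.lower() else 0.1
print(conditional_delegation("You are an idiot", rules, stub_model))    # -> "remove"
print(conditional_delegation("Nice weather today", rules, stub_model))  # -> "human_review"
```

The key design point the sketch captures is that the human does not judge individual comments; they author rules once, and only comments falling outside every trusted region incur per-item human effort.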