Paper Title

Fair Effect Attribution in Parallel Online Experiments

Authors

Alexander Buchholz, Vito Bellini, Giuseppe Di Benedetto, Yannik Stein, Matteo Ruffini, Fabian Moerchen

Abstract

A/B tests serve the purpose of reliably identifying the effect of changes introduced in online services. It is common for online platforms to run a large number of simultaneous experiments by randomly splitting incoming user traffic into treatment and control groups. Despite a perfect randomization between different groups, simultaneous experiments can interact with each other and create a negative impact on average population outcomes such as engagement metrics. These are measured globally and monitored to protect overall user experience. Therefore, it is crucial to measure these interaction effects and attribute their overall impact in a fair way to the respective experimenters. We suggest an approach to measure and disentangle the effect of simultaneous experiments by providing a cost-sharing approach based on Shapley values. We also provide a counterfactual perspective that predicts shared impact based on conditional average treatment effects, making use of causal inference techniques. We illustrate our approach in real-world and synthetic data experiments.
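
The Shapley-value cost-sharing idea described in the abstract can be illustrated with a small sketch. This is not the paper's estimator: the value function `v` below is a hypothetical example in which two experiments interact negatively, and the exact Shapley formula (feasible for small numbers of experiments) splits the interaction cost between them.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a small set of players.

    players: list of experiment ids
    v: function mapping a frozenset of players to the joint
       metric impact of running exactly those experiments.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Weight of coalition S in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p when joining S.
                phi[p] += weight * (v(S | {p}) - v(S))
    return phi

# Hypothetical effects: experiments A and B each lift the metric by
# +1.0 in isolation but interact negatively (-0.4) when run together;
# C contributes +0.5 with no interactions.
effects = {"A": 1.0, "B": 1.0, "C": 0.5}

def v(S):
    total = sum(effects[p] for p in S)
    if {"A", "B"} <= S:
        total -= 0.4  # interaction cost to be shared fairly
    return total

phi = shapley_values(["A", "B", "C"], v)
# The -0.4 interaction is split equally between A and B:
# phi == {"A": 0.8, "B": 0.8, "C": 0.5}
```

By construction the attributions sum to the jointly observed impact `v({A, B, C}) = 2.1`, which is the "fair attribution" property the abstract refers to: each experimenter is charged their average marginal contribution over all orderings.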
