Paper Title

Incentivizing Combinatorial Bandit Exploration

Paper Authors

Xinyan Hu, Dung Daniel Ngo, Aleksandrs Slivkins, Zhiwei Steven Wu

Paper Abstract

Consider a bandit algorithm that recommends actions to self-interested users in a recommendation system. The users are free to choose other actions and need to be incentivized to follow the algorithm's recommendations. While the users prefer to exploit, the algorithm can incentivize them to explore by leveraging the information collected from the previous users. All published work on this problem, known as incentivized exploration, focuses on small, unstructured action sets and mainly targets the case when the users' beliefs are independent across actions. However, realistic exploration problems often feature large, structured action sets and highly correlated beliefs. We focus on a paradigmatic exploration problem with structure: combinatorial semi-bandits. We prove that Thompson Sampling, when applied to combinatorial semi-bandits, is incentive-compatible when initialized with a sufficient number of samples of each arm (where this number is determined in advance by the Bayesian prior). Moreover, we design incentive-compatible algorithms for collecting the initial samples.
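As a rough illustration of the approach the abstract describes (not the paper's exact algorithm), the sketch below runs Thompson Sampling on a combinatorial semi-bandit where an action selects k of m Bernoulli arms and each chosen arm's reward is observed. The per-arm initialization count n0, the round-robin warm-up, and the observe helper are all assumptions introduced here for illustration; in the paper, the required number of initial samples is determined by the Bayesian prior, and dedicated incentive-compatible algorithms collect them.

import numpy as np

# Minimal sketch: Thompson Sampling for a combinatorial semi-bandit with
# Bernoulli arms and "choose k of m arms" actions. n0 stands in for the
# prior-dependent initial sample count the paper fixes in advance; its
# value here is illustrative only.

rng = np.random.default_rng(0)
m, k, n0, T = 10, 3, 5, 1000           # arms, subset size, init samples, rounds
true_means = rng.uniform(0.1, 0.9, m)  # unknown Bernoulli means (simulation only)

# Beta(1, 1) priors, updated per arm (semi-bandit feedback: every arm in
# the chosen subset reveals its own reward)
alpha = np.ones(m)
beta = np.ones(m)

def observe(arm):
    """Draw a Bernoulli reward for one arm and update its posterior."""
    r = rng.random() < true_means[arm]
    alpha[arm] += r
    beta[arm] += 1 - r

# Phase 1: collect n0 samples of each arm. (The paper designs
# incentive-compatible algorithms for this phase; plain round-robin
# here is a placeholder, not their method.)
for _ in range(n0):
    for arm in range(m):
        observe(arm)

# Phase 2: Thompson Sampling over the combinatorial action set. Sample a
# mean vector from the posterior, then recommend the best action under
# the sample (here: the top-k arms by sampled mean).
for _ in range(T):
    sampled_means = rng.beta(alpha, beta)
    action = np.argsort(sampled_means)[-k:]
    for arm in action:
        observe(arm)

print("posterior means:", np.round(alpha / (alpha + beta), 2))
print("true means:     ", np.round(true_means, 2))

With enough warm-up samples per arm, the posterior concentrates sufficiently that following the sampled recommendation is also each user's preferred choice, which is the incentive-compatibility property the paper proves for Thompson Sampling.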
