Paper Title

Langevin Monte Carlo for Contextual Bandits

Authors

Pan Xu, Hongkai Zheng, Eric Mazumdar, Kamyar Azizzadenesheli, Anima Anandkumar

Abstract

We study the efficiency of Thompson sampling for contextual bandits. Existing Thompson sampling-based algorithms need to construct a Laplace approximation (i.e., a Gaussian distribution) of the posterior distribution, which is inefficient to sample from in high-dimensional applications with general covariance matrices. Moreover, the Gaussian approximation may not be a good surrogate for the posterior distribution for general reward generating functions. We propose an efficient posterior sampling algorithm, viz., Langevin Monte Carlo Thompson Sampling (LMC-TS), that uses Markov Chain Monte Carlo (MCMC) methods to sample directly from the posterior distribution in contextual bandits. Our method is computationally efficient since it only needs to perform noisy gradient descent updates without constructing the Laplace approximation of the posterior distribution. We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits, viz., linear contextual bandits. We conduct experiments on both synthetic data and real-world datasets with different contextual bandit models, which demonstrate that directly sampling from the posterior is both computationally efficient and competitive in performance.
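To make the "noisy gradient descent updates" concrete, below is a minimal Python sketch of one round of LMC-TS for the linear contextual bandit case described in the abstract: an approximate posterior sample is drawn by running unadjusted Langevin dynamics on the regularized least-squares loss, and the arm maximizing the sampled linear reward is pulled. The function name, the hyperparameters (step_size, n_langevin_steps, temperature, lam), and the environment interface are illustrative assumptions, not the paper's exact algorithmic details.

```python
import numpy as np

def lmc_ts_round(theta, X_hist, r_hist, arms, step_size=1e-3,
                 n_langevin_steps=50, temperature=1.0, lam=1.0, rng=None):
    """One round of LMC-TS (sketch): Langevin-sample theta, then act greedily.

    theta   : current parameter iterate, shape (d,)
    X_hist  : contexts of previously chosen arms, shape (t, d)
    r_hist  : previously observed rewards, shape (t,)
    arms    : candidate arm feature vectors this round, shape (K, d)
    """
    rng = rng or np.random.default_rng()
    for _ in range(n_langevin_steps):
        # Gradient of the regularized squared loss
        #   L(theta) = ||X theta - r||^2 + lam * ||theta||^2
        grad = 2.0 * (X_hist.T @ (X_hist @ theta - r_hist) + lam * theta)
        # Noisy gradient descent (unadjusted Langevin) update:
        # no covariance matrix is ever formed, only gradients and Gaussian noise.
        noise = rng.standard_normal(theta.shape)
        theta = theta - step_size * grad \
            + np.sqrt(2.0 * step_size * temperature) * noise
    # Thompson-style action: pull the arm that maximizes the sampled reward
    chosen_arm = int(np.argmax(arms @ theta))
    return theta, chosen_arm
```

Under this reading, the per-round cost is a fixed number of gradient evaluations, which is what the abstract contrasts with Laplace-approximation approaches that must factorize a general covariance matrix to sample in high dimensions.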
