Paper Title
Better and Simpler Learning-Augmented Online Caching
Paper Authors
Paper Abstract
Lykouris and Vassilvitskii (ICML 2018) introduce a model of online caching with machine-learned advice, where each page request additionally comes with a prediction of when that page will next be requested. In this model, a natural goal is to design algorithms that (1) perform well when the advice is accurate and (2) remain robust in the worst case a la traditional competitive analysis. Lykouris and Vassilvitskii give such an algorithm by adapting the Marker algorithm to the learning-augmented setting. In a recent work, Rohatgi (SODA 2020) improves on their result with an approach also inspired by randomized marking. We continue the study of this problem, but with a somewhat different approach: We consider combining the BlindOracle algorithm, which just naïvely follows the predictions, with an optimal competitive algorithm for online caching in a black-box manner. The resulting algorithm outperforms all existing approaches while being significantly simpler. Moreover, we show that combining BlindOracle with LRU is in fact optimal among deterministic algorithms for this problem.
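The abstract describes BlindOracle as an algorithm that "just naïvely follows the predictions." A minimal sketch of that idea, assuming the standard interpretation (apply Belady's furthest-in-future eviction rule to the *predicted* next-request times); all names and the interface are illustrative, not taken from the paper:

```python
def blind_oracle(requests, predictions, k):
    """Hedged sketch of the BlindOracle idea: trust the predictions and
    evict the cached page predicted to be requested furthest in the future.

    requests:    list of page ids, in request order
    predictions: predictions[i] is the predicted time of the next request
                 to requests[i] (use float('inf') for "never again")
    k:           cache capacity (number of pages)
    Returns the number of cache misses.
    """
    cache = {}  # page id -> most recent predicted next-request time
    misses = 0
    for page, pred in zip(requests, predictions):
        if page not in cache:
            misses += 1
            if len(cache) >= k:
                # Evict the page whose predicted next request is latest.
                victim = max(cache, key=cache.get)
                del cache[victim]
        cache[page] = pred  # refresh with the latest prediction
    return misses
```

With perfect predictions this reduces to Belady's offline-optimal rule; with arbitrary predictions it has no worst-case guarantee on its own, which is why the paper combines it with a competitive algorithm (e.g. LRU) in a black-box manner.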