Paper Title
Assessing the Feasibility of Web-Request Prediction Models on Mobile Platforms
Authors
Abstract
Prefetching web pages is a well-studied solution to reduce network latency by predicting users' future actions based on their past behaviors. However, such techniques are largely unexplored on mobile platforms. Today's privacy regulations make it infeasible to explore prefetching with the usual strategy of amassing large amounts of data over long periods and constructing conventional, "large" prediction models. Our work is based on the observation that this may not be necessary: Given previously reported mobile-device usage trends (e.g., repetitive behaviors in brief bursts), we hypothesized that prefetching should work effectively with "small" models trained on mobile-user requests collected during much shorter time periods. To test this hypothesis, we constructed a framework for automatically assessing prediction models, and used it to conduct an extensive empirical study based on over 15 million HTTP requests collected from nearly 11,500 mobile users during a 24-hour period, resulting in over 7 million models. Our results demonstrate the feasibility of prefetching with small models on mobile platforms, directly motivating future work in this area. We further introduce several strategies for improving prediction models while reducing the model size. Finally, our framework provides the foundation for future explorations of effective prediction models across a range of usage scenarios.
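To make the hypothesis concrete, the following is a minimal illustrative sketch of the kind of "small" per-user model the abstract describes: a first-order predictor trained on a short window of a user's HTTP requests, which predicts the next request as the most frequent successor of the current one. This is a hypothetical example for intuition only, not the paper's actual prediction algorithm; the class and method names are invented for illustration.

```python
from collections import Counter, defaultdict

class SmallRequestPredictor:
    """Illustrative "small" model: predict the next URL as the most
    frequent successor of the current URL within a short training
    window. (A sketch of the idea in the abstract, not the paper's
    actual algorithm.)"""

    def __init__(self):
        # Maps each URL to a count of the URLs that followed it.
        self.successors = defaultdict(Counter)

    def train(self, requests):
        """requests: chronologically ordered URLs from a short window."""
        for cur, nxt in zip(requests, requests[1:]):
            self.successors[cur][nxt] += 1

    def predict(self, current):
        """Return the most likely next request, or None if unseen."""
        counts = self.successors.get(current)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Repetitive behavior in short bursts means even this tiny model can
# predict (and hence prefetch) the likely next request.
model = SmallRequestPredictor()
model.train(["a.com", "b.com", "a.com", "b.com", "a.com", "c.com"])
print(model.predict("a.com"))  # → b.com (followed "a.com" twice vs. once)
```

Such a model can be trained on-device from hours of data rather than months, which is the property the study evaluates at scale.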