Paper Title

Personalized Policy Learning using Longitudinal Mobile Health Data

Authors

Xinyu Hu, Min Qian, Bin Cheng, Ying Kuen Cheung

Abstract

We address the personalized policy learning problem using longitudinal mobile health application usage data. Personalized policy represents a paradigm shift from developing a single policy that may prescribe personalized decisions by tailoring. Specifically, we aim to develop the best policy, one per user, based on estimating random effects under a generalized linear mixed model. With many random effects, we consider a new estimation method and a penalized objective to circumvent high-dimensional integrals in the marginal likelihood approximation. We establish consistency and optimality of our method with endogenous app usage. We apply our method to develop personalized push ("prompt") schedules for 294 app users, with the goal of maximizing the prompt response rate given past app usage and other contextual factors. We found that the best push schedule, given the same covariates, varied among the users, thus calling for personalized policies. Using the estimated personalized policies would have achieved a mean prompt response rate of 23% in these users at 16 weeks or later: a remarkable improvement over the observed rate (11%), while the literature suggests 3%-15% user engagement at 3 months after download. The proposed method compares favorably to existing estimation methods, including the R function "glmer", in a simulation study.
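The core idea of penalizing random effects instead of integrating them out can be illustrated with a minimal sketch. The code below is not the paper's method: it is a simplified stand-in that fits a logistic model with per-user random intercepts by maximizing a ridge-penalized joint log-likelihood, so no high-dimensional integral over the random effects is needed. All data, the single covariate, the penalty weight, and the thresholding rule are hypothetical choices for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data (hypothetical): n_users users, T binary prompt responses each,
# one contextual covariate x, and a true per-user random intercept b_true.
n_users, T = 20, 50
x = rng.normal(size=(n_users, T))
beta_true = 1.0
b_true = rng.normal(scale=0.8, size=n_users)
p = 1 / (1 + np.exp(-(beta_true * x + b_true[:, None])))
y = rng.binomial(1, p)

def neg_penalized_loglik(params, lam=1.0):
    """Negative Bernoulli log-likelihood plus a ridge penalty on the
    random intercepts; the penalty plays the role of the Gaussian
    random-effects distribution, replacing marginal-likelihood integration."""
    beta, b = params[0], params[1:]
    eta = beta * x + b[:, None]
    ll = np.sum(y * eta - np.log1p(np.exp(eta)))
    return -ll + lam * np.sum(b ** 2)

res = minimize(neg_penalized_loglik, np.zeros(1 + n_users), method="L-BFGS-B")
beta_hat, b_hat = res.x[0], res.x[1:]

def response_probability(user, x_new):
    """Estimated per-user response probability; a personalized policy could
    prompt when this exceeds a threshold (rule illustrative, not the paper's)."""
    return 1 / (1 + np.exp(-(beta_hat * x_new + b_hat[user])))
```

Because each user gets their own intercept estimate, two users facing identical covariates can receive different decisions, which is the sense in which the resulting policy is personalized.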
