Paper Title
基于集群的社会强化学习
Cluster-Based Social Reinforcement Learning
Paper Authors
Paper Abstract
Social Reinforcement Learning methods, which model agents in large networks, are useful for fake news mitigation, personalized teaching/healthcare, and viral marketing, but it is challenging to incorporate inter-agent dependencies into the models effectively due to network size and sparse interaction data. Previous social RL approaches either ignore agent dependencies or model them in a computationally intensive manner. In this work, we incorporate agent dependencies efficiently into a compact model by clustering users (based on their payoff and contribution to the goal) and combine this with a method to easily derive personalized agent-level policies from cluster-level policies. We also propose a dynamic clustering approach that captures changing user behavior. Experiments on real-world datasets show that our proposed approach learns more accurate policy estimates and converges more quickly than several baselines that either ignore agent correlations or use only static clusters.
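The cluster-then-personalize idea from the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the feature definitions, the use of k-means, and all names (`agent_features`, `cluster_q`, `agent_policy`) are assumptions for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-agent features: payoff and contribution to the goal.
agent_features = rng.random((100, 2))

def kmeans(x, k, iters=20, seed=0):
    """Plain NumPy k-means, standing in for the paper's clustering step."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # Assign each agent to its nearest cluster center.
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned agents.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(agent_features, k=4)

# Compact cluster-level action values (random placeholder, k x n_actions);
# in the paper these would be learned by RL at the cluster level.
n_actions = 3
cluster_q = rng.random((4, n_actions))

# Derive agent-level policies from cluster-level values: each agent
# inherits its cluster's action values and acts greedily on them.
agent_q = cluster_q[labels]            # shape (100, n_actions)
agent_policy = agent_q.argmax(axis=1)  # one greedy action per agent
```

Dynamic clustering, as proposed in the abstract, would amount to re-running the assignment step periodically as agent features drift, so the compact cluster-level model tracks changing user behavior.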