Paper Title

GraMeR: Graph Meta Reinforcement Learning for Multi-Objective Influence Maximization

Paper Authors

Sai Munikoti, Balasubramaniam Natarajan, Mahantesh Halappanavar

Paper Abstract

Influence maximization (IM) is the combinatorial problem of identifying a subset of nodes in a network (graph), called the seed nodes, which, when activated, provide a maximal spread of influence in the network for a given diffusion model and a budget on the seed-set size. IM has numerous applications such as viral marketing, epidemic control, sensor placement, and other network-related tasks. However, its use is limited by the computational complexity of current algorithms. Recently, learning-based heuristics for IM have been explored to ease the computational burden, but current approaches have serious limitations: (1) IM formulations consider only influence via spread and ignore self-activation; (2) limited scalability to large graphs; (3) poor generalizability across graph families; and (4) low computational efficiency, with a long running time to identify the seed set for every test network. In this work, we address each of these limitations through a unique approach that involves (1) formulating a generic IM problem as a Markov decision process that handles both intrinsic and influence activations; (2) employing double Q-learning to estimate seed nodes; (3) ensuring scalability via sub-graph-based representations; and (4) incorporating generalizability via meta-learning across graph families. Extensive experiments are carried out on various standard networks to validate the performance of the proposed Graph Meta Reinforcement learning (GraMeR) framework. The results indicate that GraMeR is multiple orders of magnitude faster than, and more generic than, conventional approaches.
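To make component (2) of the abstract concrete, the sketch below shows standard tabular double Q-learning applied to a toy seed-selection MDP: the state is the current seed set, an action adds one node, and the reward is the marginal gain in expected spread under a Monte-Carlo independent-cascade simulation. Everything here is a hypothetical, minimal illustration of the double Q-learning update itself; the toy graph, hyperparameters, and reward design are assumptions, and GraMeR operates on learned sub-graph representations rather than a Q-table.

```python
# Illustrative sketch only: tabular double Q-learning for seed selection
# on a small hypothetical graph. Not the GraMeR implementation.
import random
from collections import defaultdict

random.seed(0)

# Toy undirected graph as adjacency lists (hypothetical example).
GRAPH = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
P = 0.3      # independent-cascade activation probability (assumed)
BUDGET = 2   # seed-set size budget (assumed)

def ic_spread(seeds, trials=200):
    """Monte-Carlo estimate of expected IC spread from a seed set."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in GRAPH[u]:
                if v not in active and random.random() < P:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

qa = defaultdict(float)  # Q^A(s, a)
qb = defaultdict(float)  # Q^B(s, a)

def greedy_action(state, eps):
    """Epsilon-greedy action over the sum of both Q-tables."""
    acts = [a for a in GRAPH if a not in state]
    if random.random() < eps:
        return random.choice(acts)
    return max(acts, key=lambda a: qa[(state, a)] + qb[(state, a)])

for episode in range(300):
    state = frozenset()
    while len(state) < BUDGET:
        a = greedy_action(state, eps=0.2)
        nxt = state | {a}
        # Reward: marginal gain in expected spread from adding node a.
        r = ic_spread(nxt) - (ic_spread(state) if state else 0.0)
        done = len(nxt) == BUDGET
        # Double Q-learning: pick one table's greedy action, evaluate it
        # with the other table (decouples action selection from estimation).
        q1, q2 = (qa, qb) if random.random() < 0.5 else (qb, qa)
        if done:
            target = r
        else:
            acts = [x for x in GRAPH if x not in nxt]
            best = max(acts, key=lambda x: q1[(nxt, x)])
            target = r + q2[(nxt, best)]
        q1[(state, a)] += 0.1 * (target - q1[(state, a)])
        state = nxt

# Greedily extract a seed set from the learned tables (no exploration).
state = frozenset()
while len(state) < BUDGET:
    state = state | {greedy_action(state, eps=0.0)}
print(sorted(state))
```

The key line is the update of `q1` using `q2`'s value at `q1`'s greedy action, which mitigates the maximization bias of single-table Q-learning; the paper's contribution is pairing this estimator with graph representations that generalize across networks.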
