Paper Title
On Projectivity in Markov Logic Networks
Paper Authors
Paper Abstract
Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Many works have noticed that MLNs, like many other relational models, do not admit consistent marginal inference over varying domain sizes. Furthermore, MLNs learnt on a certain domain do not generalize to new domains of different sizes. In recent works, connections have emerged between domain size dependence, lifted inference, and learning from sub-sampled domains. Central to these works is the notion of projectivity. The probability distributions ascribed by projective models render the marginal probabilities of sub-structures independent of the domain cardinality. Hence, projective models admit efficient marginal inference, removing any dependence on the domain size. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special model within this class of MLNs, namely the Relational Block Model (RBM). We show that, in terms of data likelihood maximization, the RBM is the best possible projective MLN in the two-variable fragment. Finally, we show that RBMs also admit consistent parameter learning over sub-sampled domains.
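As a rough illustration (not part of the original abstract), the projectivity condition referred to above is commonly stated as a marginalization consistency requirement across domain sizes. The notation below is assumed for this sketch rather than taken from the paper: P^{(n)} denotes the MLN distribution on domains of size n, w_i are formula weights, n_i(\omega) counts true groundings of formula i, and \omega\downarrow_{[m]} restricts a structure \omega to an m-element sub-domain.

% Assumed notation; a minimal LaTeX sketch of the standard projectivity condition.
\begin{align}
  P^{(n)}(\omega) &= \frac{1}{Z_n}\exp\Big(\sum_i w_i\, n_i(\omega)\Big), \\
  P^{(n)}\big(\omega\!\downarrow_{[m]} = \omega'\big) &= P^{(m)}(\omega')
  \qquad \text{for all } m \le n \ \text{(projectivity)}.
\end{align}

Read this way, the marginal probability of any fixed sub-structure can be computed within the small domain alone, which is why projective models avoid any dependence on the full domain size.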