Paper Title

Semantic Probabilistic Layers for Neuro-Symbolic Learning

Paper Authors

Kareem Ahmed, Stefano Teso, Kai-Wei Chang, Guy Van den Broeck, Antonio Vergari

Paper Abstract

We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic constraints. Our Semantic Probabilistic Layer (SPL) can model intricate correlations, and hard constraints, over a structured output space all while being amenable to end-to-end learning via maximum likelihood. SPLs combine exact probabilistic inference with logical reasoning in a clean and modular way, learning complex distributions and restricting their support to solutions of the constraint. As such, they can faithfully, and efficiently, model complex SOP tasks beyond the reach of alternative neuro-symbolic approaches. We empirically demonstrate that SPLs outperform these competitors in terms of accuracy on challenging SOP tasks including hierarchical multi-label classification, pathfinding and preference learning, while retaining perfect constraint satisfaction.
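Below is a minimal toy sketch of the core idea described in the abstract: a layer that takes neural-network scores over candidate structured outputs, assigns zero probability to any output violating the symbolic constraint, and renormalizes, so the resulting distribution is supported only on constraint-satisfying solutions and can be trained by maximum likelihood. The function name `spl_toy` and the brute-force enumeration over candidate outputs are illustrative assumptions for a small output space; the actual SPL achieves this tractably with probabilistic and constraint circuits rather than enumeration.

```python
import torch

def spl_toy(logits, constraint_mask):
    """Toy illustration: mask out constraint-violating outputs (log-space -inf)
    and renormalize, yielding a distribution whose support is restricted to
    outputs that satisfy the constraint."""
    masked = logits.masked_fill(~constraint_mask, float("-inf"))
    return torch.softmax(masked, dim=-1)  # exact renormalization over valid outputs

# Example: 3 binary labels with an implication constraint y0 -> y1
# (e.g., predicting a child class implies predicting its parent in
# hierarchical multi-label classification).
candidates = torch.tensor(
    [[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
    dtype=torch.float,
)
constraint_mask = candidates[:, 0] <= candidates[:, 1]  # y0 -> y1
logits = torch.randn(len(candidates))                   # stand-in for network scores
probs = spl_toy(logits, constraint_mask)
assert probs[~constraint_mask].sum() == 0               # invalid outputs get zero mass
```

Because the masking and renormalization are differentiable with respect to the logits, the (negative log-) likelihood of a valid target output can be backpropagated end to end through the upstream network, matching the training setup the abstract describes.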
