Paper Title

Constraint Guided Gradient Descent: Guided Training with Inequality Constraints

Authors

Quinten Van Baelen, Peter Karsmakers

Abstract

Deep learning is typically performed by learning a neural network solely from data in the form of input-output pairs, ignoring available domain knowledge. In this work, the Constraint Guided Gradient Descent (CGGD) framework is proposed, which enables the injection of domain knowledge into the training procedure. The domain knowledge is assumed to be described as a conjunction of hard inequality constraints, which appears to be a natural choice for several applications. Compared to other neuro-symbolic approaches, the proposed method converges to a model that satisfies any inequality constraint on the training data and does not require first transforming the constraints into some ad hoc term that is added to the learning (optimisation) objective. Under certain conditions, it is shown that CGGD converges to a model that satisfies the constraints on the training set, while prior work does not necessarily converge to such a model. It is empirically shown on two independent, small data sets that CGGD makes training less dependent on the initialisation of the network and improves constraint satisfiability on all data.
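To make the idea concrete, below is a minimal PyTorch sketch of one possible reading of constraint-guided updates: when a hard inequality constraint is violated, the data gradient is augmented with a fixed-magnitude direction that reduces the violation, rather than adding a penalty term to the loss. The toy model, the constraint f(x) >= 0, and the `rescale` step size are illustrative assumptions, not the authors' reference implementation.

```python
import torch

torch.manual_seed(0)

# Toy regression data; we additionally demand that the model output itself
# stays non-negative (a single hard inequality constraint f(x) >= 0).
x = torch.linspace(-1.0, 1.0, 64).unsqueeze(1)
y = x ** 2

model = torch.nn.Sequential(
    torch.nn.Linear(1, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
rescale = 0.1  # hypothetical fixed step size along the constraint direction

for epoch in range(500):
    opt.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)
    loss.backward()

    # relu(-f(x)) is positive exactly where the constraint f(x) >= 0 fails.
    violation = torch.relu(-model(x)).sum()
    if violation.item() > 0:
        cgrads = torch.autograd.grad(violation, list(model.parameters()))
        for p, g in zip(model.parameters(), cgrads):
            # Steer the update with a normalised, fixed-magnitude direction
            # that reduces the violation, instead of folding the constraint
            # into the optimisation objective as a weighted penalty.
            p.grad = p.grad + rescale * g / (g.norm() + 1e-12)

    opt.step()

print("remaining violation:", torch.relu(-model(x)).sum().item())
```

Using only the direction (not the magnitude) of the constraint gradient is what distinguishes this style of guidance from penalty-based regularisation, which must trade off the data loss against an ad hoc constraint term.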
