Paper Title
You Only Derive Once (YODO): Automatic Differentiation for Efficient Sensitivity Analysis in Bayesian Networks
Paper Authors
Paper Abstract
Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. In particular, the so-called sensitivity value measures the partial derivative of the quantity of interest with respect to one of the network's conditional probabilities. However, finding such values in large networks with thousands of parameters can become computationally very expensive. We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass. Our method first marginalizes the whole network once, using, e.g., variable elimination, and then backpropagates through this operation to obtain the gradient with respect to all input parameters. We demonstrate our routines by ranking all parameters by importance on a Bayesian network modeling humanitarian crises and disasters, and then show the method's efficiency by scaling it to huge networks with up to 100,000 parameters. An implementation of the methods using the popular machine learning library PyTorch is freely available.
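The core idea of the abstract can be illustrated on a toy network. The following is a minimal sketch (not the authors' implementation) using a hypothetical two-variable network A → B with binary states: the conditional probability tables are stored as PyTorch tensors with `requires_grad=True`, exact inference marginalizes out A in one forward pass, and a single `backward()` call then yields the sensitivity values, i.e., the partial derivatives of the quantity of interest with respect to every parameter.

```python
import torch

# Toy Bayesian network A -> B with binary variables (hypothetical example).
# The parameters are the network's conditional probability tables.
p_a = torch.tensor([0.3, 0.7], requires_grad=True)  # P(A)
p_b_given_a = torch.tensor([[0.9, 0.1],             # P(B | A=0)
                            [0.2, 0.8]],            # P(B | A=1)
                           requires_grad=True)

# Exact inference (one forward pass): marginalize A to obtain P(B),
# i.e. P(B=b) = sum_a P(A=a) * P(B=b | A=a).
p_b = p_a @ p_b_given_a

# Quantity of interest: P(B = 1).
quantity = p_b[1]

# One reverse pass ("you only derive once") yields all sensitivity
# values at the same time, stored in the .grad fields.
quantity.backward()

print(quantity.item())   # P(B=1) = 0.3*0.1 + 0.7*0.8 = 0.59
print(p_a.grad)          # dP(B=1)/dP(A=a)       -> tensor([0.1, 0.8])
print(p_b_given_a.grad)  # dP(B=1)/dP(B=b|A=a)   -> zeros in column b=0
```

In a real network the single matrix product would be replaced by a full variable-elimination (or junction-tree) computation, but the principle is unchanged: one forward pass for inference, one backward pass for all sensitivity values, regardless of the number of parameters.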