Paper Title
The Infinity Mirror Test for Graph Models
Paper Authors
Paper Abstract
Graph models, like other machine learning models, have implicit and explicit biases built-in, which often impact performance in nontrivial ways. The model's faithfulness is often measured by comparing the newly generated graph against the source graph using any number or combination of graph properties. Differences in the size or topology of the generated graph, therefore, indicate a loss in the model. Yet, in many systems, errors encoded in loss functions are subtle and not well understood. In the present work, we introduce the Infinity Mirror test for analyzing the robustness of graph models. This straightforward stress test works by repeatedly fitting a model to its own outputs. A hypothetically perfect graph model would have no deviation from the source graph; however, the model's implicit biases and assumptions are exaggerated by the Infinity Mirror test, exposing potential issues that were previously obscured. Through an analysis of thousands of experiments on synthetic and real-world graphs, we show that several conventional graph models degenerate in exciting and informative ways. We believe that the observed degenerative patterns are clues to the future development of better graph models.
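To make the test procedure concrete, the following is a minimal Python sketch of the repeated fit-and-generate loop described in the abstract. The fit/generate interface, the infinity_mirror function, and the toy ErdosRenyiModel stand-in are illustrative assumptions, not the paper's implementation or one of the models it evaluates; in the actual study, the model class and the graph properties tracked would be replaced by the models and metrics under evaluation.

import networkx as nx


class ErdosRenyiModel:
    """Toy graph model (hypothetical stand-in): estimates n and p, then samples G(n, p)."""

    def fit(self, graph: nx.Graph) -> "ErdosRenyiModel":
        self.n = graph.number_of_nodes()
        self.p = nx.density(graph)
        return self

    def generate(self) -> nx.Graph:
        return nx.gnp_random_graph(self.n, self.p)


def infinity_mirror(source: nx.Graph, model_cls, generations: int = 20):
    """Repeatedly fit a model to its own output, feeding each generated graph back in."""
    graphs = [source]
    current = source
    for _ in range(generations):
        model = model_cls().fit(current)   # fit the model to the previous generation
        current = model.generate()         # generate the next-generation graph
        graphs.append(current)
    return graphs


if __name__ == "__main__":
    # Track a simple property (density) to see how it drifts across generations.
    source = nx.karate_club_graph()
    chain = infinity_mirror(source, ErdosRenyiModel, generations=10)
    for i, g in enumerate(chain):
        print(f"gen {i:2d}: {g.number_of_nodes()} nodes, "
              f"{g.number_of_edges()} edges, density {nx.density(g):.3f}")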