Paper Title


Comparison of neural closure models for discretised PDEs

Authors

Hugo Melchers, Daan Crommelin, Barry Koren, Vlado Menkovski, Benjamin Sanderse

Abstract


Neural closure models have recently been proposed as a method for efficiently approximating small scales in multiscale systems with neural networks. The choice of loss function and associated training procedure has a large effect on the accuracy and stability of the resulting neural closure model. In this work, we systematically compare three distinct procedures: "derivative fitting", "trajectory fitting" with discretise-then-optimise, and "trajectory fitting" with optimise-then-discretise. Derivative fitting is conceptually the simplest and computationally the most efficient approach and is found to perform reasonably well on one of the test problems (Kuramoto-Sivashinsky) but poorly on the other (Burgers). Trajectory fitting is computationally more expensive but is more robust and is therefore the preferred approach. Of the two trajectory fitting procedures, the discretise-then-optimise approach produces more accurate models than the optimise-then-discretise approach. While the optimise-then-discretise approach can still produce accurate models, care must be taken in choosing the length of the trajectories used for training, in order to train the models on long-term behaviour while still producing reasonably accurate gradients during training. Two existing theorems are interpreted in a novel way that gives insight into the long-term accuracy of a neural closure model based on how accurate it is in the short term.
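
To make the distinction between the compared loss functions concrete, here is a minimal sketch (not the authors' code) of a derivative-fitting loss versus a discretise-then-optimise trajectory-fitting loss, written in JAX. The right-hand-side function `rhs`, the placeholder "closure" term, and the forward-Euler integrator are illustrative assumptions, not details taken from the paper.

```python
import jax
import jax.numpy as jnp

def rhs(theta, u):
    # Coarse-grained RHS plus a neural closure term. Both terms are
    # stand-ins: a real model would use a discretised PDE operator and
    # a neural network parameterised by theta.
    f_coarse = -u
    nn_closure = theta * u
    return f_coarse + nn_closure

def derivative_fitting_loss(theta, u_snapshots, dudt_snapshots):
    # Match the model RHS to reference time derivatives. No time
    # integration is needed, so this is cheap, but it only enforces
    # instantaneous (short-term) accuracy.
    pred = jax.vmap(lambda u: rhs(theta, u))(u_snapshots)
    return jnp.mean((pred - dudt_snapshots) ** 2)

def trajectory_fitting_loss(theta, u0, u_ref, dt):
    # Roll the model forward with forward Euler and match the reference
    # trajectory. Differentiating this loss through the solver steps
    # (as JAX's autodiff does here) is the discretise-then-optimise
    # approach; optimise-then-discretise would instead solve a
    # continuous adjoint equation backwards in time.
    def step(u, u_r):
        u_next = u + dt * rhs(theta, u)
        return u_next, (u_next - u_r) ** 2
    _, errs = jax.lax.scan(step, u0, u_ref)
    return jnp.mean(errs)

grad_dfit = jax.grad(derivative_fitting_loss)
grad_tfit = jax.grad(trajectory_fitting_loss)

# Example: one trajectory-fitting gradient on synthetic data.
u0 = jax.random.normal(jax.random.PRNGKey(0), (8,))
u_ref = jnp.stack([u0 * jnp.exp(-0.5 * 0.01 * (i + 1)) for i in range(20)])
g = grad_tfit(0.1, u0, u_ref, 0.01)
```

Because the trajectory-fitting loss is differentiated through every solver step, its cost grows with the trajectory length, which matches the abstract's observation that trajectory fitting is more expensive but more robust, and that the training-trajectory length must be chosen with care.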
