Paper Title
Learning to Accelerate Partial Differential Equations via Latent Global Evolution
Paper Authors
Paper Abstract
Simulating the time evolution of Partial Differential Equations (PDEs) of large-scale systems is crucial in many scientific and engineering domains such as fluid dynamics, weather forecasting, and their inverse optimization problems. However, both classical solvers and recent deep learning-based surrogate models are typically extremely computationally intensive because of their local evolution: they need to update the state of each discretized cell at each time step during inference. Here we develop Latent Evolution of PDEs (LE-PDE), a simple, fast, and scalable method to accelerate the simulation and inverse optimization of PDEs. LE-PDE learns a compact, global representation of the system and evolves it fully in the latent space with learned latent evolution models. LE-PDE achieves speed-up by having a much smaller latent dimension to update during long rollouts, as compared to updating in the input space. We introduce new learning objectives to effectively learn such latent dynamics and ensure long-term stability. We further introduce techniques for speeding up inverse optimization of boundary conditions for PDEs via backpropagation through time in latent space, and an annealing technique to address the non-differentiability and sparse interaction of boundary conditions. We test our method on a 1D benchmark of nonlinear PDEs, 2D Navier-Stokes flows in the turbulent phase, and an inverse optimization of boundary conditions in 2D Navier-Stokes flow. Compared to state-of-the-art deep learning-based surrogate models and other strong baselines, we demonstrate up to 128x reduction in the dimensions to update and up to 15x improvement in speed, while achieving competitive accuracy.
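The core pipeline described in the abstract (encode the full discretized state once, roll the dynamics forward entirely in a small latent space, and decode only when a full-grid state is needed) can be illustrated with a minimal sketch. This is not the paper's implementation: the shapes are made up, and the learned neural encoder, latent evolution model, and decoder are stood in for by fixed linear maps, purely to show where the dimensionality reduction during rollout comes from.

```python
import numpy as np

# Hypothetical sizes (assumptions, not from the paper):
N = 1024   # number of discretized cells in the input space
d = 8      # latent dimension, d << N -> far fewer values updated per step

rng = np.random.default_rng(0)

# Stand-ins for the learned components of LE-PDE:
E = rng.standard_normal((d, N)) / np.sqrt(N)         # encoder  q: R^N -> R^d
G = np.eye(d) + 0.01 * rng.standard_normal((d, d))   # latent evolution model
D = rng.standard_normal((N, d)) / np.sqrt(d)         # decoder  h: R^d -> R^N

u0 = rng.standard_normal(N)   # initial PDE state on the grid

# Encode once, then roll out fully in the low-dimensional latent space.
z = E @ u0
for _ in range(100):          # long rollout: each step updates only d values,
    z = G @ z                 # versus N values for a local surrogate or solver

u_T = D @ z                   # decode only when a full-grid state is needed
print(u_T.shape)              # full-grid prediction after 100 latent steps
```

With these toy sizes the per-step update touches d = 8 values instead of N = 1024, mirroring the abstract's claim of up to 128x fewer dimensions to update during rollout.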