Paper Title

Physics-Guided, Physics-Informed, and Physics-Encoded Neural Networks in Scientific Computing

Authors

Faroughi, Salah A., Pawar, Nikhil, Fernandes, Celio, Raissi, Maziar, Das, Subasish, Kalantari, Nima K., Mahjour, Seyed Kourosh

Abstract

Recent breakthroughs in computing power have made it feasible to use machine learning and deep learning to advance scientific computing in many fields, including fluid mechanics, solid mechanics, materials science, etc. Neural networks, in particular, play a central role in this hybridization. Due to their intrinsic architecture, conventional neural networks cannot be successfully trained and scoped when data is sparse, which is the case in many scientific and engineering domains. Nonetheless, neural networks provide a solid foundation to respect physics-driven or knowledge-based constraints during training. Generally speaking, there are three distinct neural network frameworks to enforce the underlying physics: (i) physics-guided neural networks (PgNNs), (ii) physics-informed neural networks (PiNNs), and (iii) physics-encoded neural networks (PeNNs). These methods provide distinct advantages for accelerating the numerical modeling of complex multiscale multi-physics phenomena. In addition, the recent developments in neural operators (NOs) add another dimension to these new simulation paradigms, especially when the real-time prediction of complex multi-physics systems is required. All these models also come with their own unique drawbacks and limitations that call for further fundamental research. This study aims to present a review of the four neural network frameworks (i.e., PgNNs, PiNNs, PeNNs, and NOs) used in scientific computing research. The state-of-the-art architectures and their applications are reviewed, limitations are discussed, and future research opportunities in terms of improving algorithms, considering causalities, expanding applications, and coupling scientific and deep learning solvers are presented. This critical review provides researchers and engineers with a solid starting point to comprehend how to integrate different layers of physics into neural networks.
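The abstract contrasts purely data-driven training with frameworks that embed physics in the loss or in the architecture. As a concrete illustration of the physics-informed (PiNN) idea it mentions, the minimal sketch below is not from the paper: PyTorch, the toy problem du/dx = -u on [0, 2] with u(0) = 1 (exact solution exp(-x)), the network size, and the optimizer settings are all illustrative assumptions. The training loss combines a differential-equation residual, obtained via automatic differentiation, with an initial-condition term.

```python
# Minimal illustrative sketch of a physics-informed loss (not the paper's code).
# Toy problem assumed for illustration: du/dx = -u on [0, 2], u(0) = 1.
import torch

torch.manual_seed(0)

# Small fully connected network approximating u_theta(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

x_col = torch.linspace(0.0, 2.0, 100).reshape(-1, 1)  # collocation points
x0 = torch.zeros(1, 1)                                # initial-condition point

for step in range(5000):
    optimizer.zero_grad()

    # Physics (residual) loss: enforce du/dx + u = 0 at the collocation points.
    x = x_col.clone().requires_grad_(True)
    u = net(x)
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    loss_physics = torch.mean((du_dx + u) ** 2)

    # Data/boundary loss: enforce the initial condition u(0) = 1.
    loss_data = torch.mean((net(x0) - 1.0) ** 2)

    loss = loss_physics + loss_data
    loss.backward()
    optimizer.step()

# Compare the trained network against the exact solution exp(-x).
with torch.no_grad():
    x_test = torch.linspace(0.0, 2.0, 5).reshape(-1, 1)
    print(torch.cat([x_test, net(x_test), torch.exp(-x_test)], dim=1))
```

Loosely speaking, dropping the residual term and fitting only data produced by physics-based solvers or experiments corresponds to the PgNN setting, while PeNNs and neural operators instead encode the physics or the solution operator into the network architecture itself, as the review discusses.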
