Title
A Method for Computing Inverse Parametric PDE Problems with Random-Weight Neural Networks
Authors
Abstract
We present a method for computing the inverse parameters and the solution field of inverse parametric PDE problems based on randomized neural networks. It extends the local extreme learning machine technique, originally developed for forward PDEs, to inverse problems. We develop three algorithms for training the neural network to solve the inverse PDE problem. The first algorithm (NLLSQ) determines the inverse parameters and the trainable network parameters together by the nonlinear least squares method with perturbations (NLLSQ-perturb). The second algorithm (VarPro-F1) eliminates the inverse parameters from the overall problem by variable projection to obtain a reduced problem involving only the trainable network parameters. It first solves the reduced problem for the trainable network parameters by the NLLSQ-perturb algorithm, and then computes the inverse parameters by the linear least squares method. The third algorithm (VarPro-F2) eliminates the trainable network parameters from the overall problem by variable projection to obtain a reduced problem involving only the inverse parameters. It first solves the reduced problem for the inverse parameters, and then computes the trainable network parameters afterwards. In a sense, VarPro-F1 and VarPro-F2 are reciprocal to each other. The presented method produces accurate results for inverse PDE problems, as shown by the numerical examples herein. For noise-free data, the errors of the inverse parameters and the solution field decrease exponentially as the number of collocation points or the number of trainable network parameters increases, and can reach a level close to machine precision. For noisy data, the accuracy degrades compared with the noise-free case, but the method remains quite accurate. The presented method has been compared with the physics-informed neural network method.
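The abstract rests on two ideas: a random-weight network, where hidden-layer weights are fixed at random values and only the linear output weights are trained by linear least squares, and variable projection, where the linear parameters are eliminated so that a reduced problem in the nonlinear parameters remains. The following sketch illustrates both on toy problems; it is not the paper's implementation, and the feature counts, the parameter ranges, and the separable exponential model are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# -- Part 1: random-weight (ELM-style) fit of a known field -------------
# Hidden-layer weights (a, b) are drawn at random and kept fixed; only
# the linear output weights w are trained, by linear least squares.
M = 100                                    # number of random features (assumed)
a = rng.uniform(-10.0, 10.0, size=M)
b = rng.uniform(-10.0, 10.0, size=M)

x = np.linspace(0.0, 1.0, 200)             # collocation points
u_true = np.sin(3.0 * x)                   # toy field to represent

Phi = np.tanh(np.outer(x, a) + b)          # design matrix: Phi[i, j] = tanh(a_j x_i + b_j)
w, *_ = np.linalg.lstsq(Phi, u_true, rcond=None)
fit_err = float(np.max(np.abs(Phi @ w - u_true)))

# -- Part 2: variable projection on a separable toy inverse problem -----
# Data y = c * exp(-lam * x): c enters linearly, lam nonlinearly.
# Eliminating c by linear least squares leaves a reduced problem in lam.
lam_true, c_true = 2.0, 3.0
y = c_true * np.exp(-lam_true * x)

def reduced_residual(lam):
    basis = np.exp(-lam * x)[:, None]      # lam-dependent linear basis
    c, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.linalg.norm(basis @ c - y)   # residual of the reduced problem

# Crude 1-D grid search over lam (a stand-in for a nonlinear
# least squares solver such as NLLSQ-perturb).
lams = np.linspace(0.5, 4.0, 351)
lam_est = lams[np.argmin([reduced_residual(l) for l in lams])]

# Recover the eliminated linear parameter afterwards.
c_est = float(np.linalg.lstsq(np.exp(-lam_est * x)[:, None], y, rcond=None)[0][0])
```

Part 2 mirrors the structure of VarPro-F2: the linear parameter is projected out, the reduced problem is solved for the nonlinear parameter first, and the linear parameter is computed afterwards by linear least squares.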