Paper title
Learning the random variables in Monte Carlo simulations with stochastic gradient descent: Machine learning for parametric PDEs and financial derivative pricing
Paper authors
Paper abstract
In financial engineering, prices of financial products are computed approximately many times each trading day, with (slightly) different parameters in each calculation. In many financial models such prices can be approximated by means of Monte Carlo (MC) simulations. To obtain a good approximation, the MC sample size usually needs to be considerably large, resulting in a long computing time to obtain a single approximation. In this paper we introduce a new approximation strategy for parametric approximation problems, including the parametric financial pricing problems described above. A central aspect of the approximation strategy proposed in this article is to combine MC algorithms with machine learning techniques to, roughly speaking, learn the random variables (LRV) in MC simulations. In other words, we employ stochastic gradient descent (SGD) optimization methods not to train the parameters of standard artificial neural networks (ANNs) but to learn the random variables appearing in MC approximations. We numerically test the LRV strategy on various parametric problems with convincing results when compared with standard MC simulations, Quasi-Monte Carlo simulations, SGD-trained shallow ANNs, and SGD-trained deep ANNs. Our numerical simulations strongly indicate that the LRV strategy might be capable of overcoming the curse of dimensionality in the $L^\infty$-norm in several cases where the standard deep learning approach has been proven unable to do so. This does not contradict the lower bounds established in the scientific literature, because this new LRV strategy falls outside the class of algorithms for which such lower bounds have been established. The proposed LRV strategy is of a general nature: it is not restricted to the parametric financial pricing problems described above but is applicable to a large class of approximation problems.
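To make the core idea concrete, the following is a minimal sketch of the "learn the random variables" principle on a toy parametric expectation, $u(p) = \mathbb{E}[\max(p + Z, 0)]$ with $Z \sim \mathcal{N}(0,1)$, rather than on a financial pricing problem. The sample points $w_1, \dots, w_N$ of an ordinary MC estimator are treated as trainable parameters and updated with SGD against fresh, noisy one-batch MC targets; the specific loss, batch sizes, and hyperparameters here are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

def Phi(x):
    # standard normal CDF (scalar math.erf vectorized for array inputs)
    return 0.5 * (1.0 + np.vectorize(erf)(x / np.sqrt(2.0)))

def exact(p):
    # closed form: E[max(p + Z, 0)] = p * Phi(p) + phi(p) for Z ~ N(0, 1)
    return p * Phi(p) + np.exp(-p**2 / 2.0) / np.sqrt(2.0 * np.pi)

N = 64                         # number of learned "random variables"
w = rng.standard_normal(N)     # initialize with an ordinary MC sample
lr = 0.5                       # SGD learning rate (illustrative choice)

for step in range(10000):
    p = rng.uniform(-1.0, 1.0)               # random problem parameter
    z = rng.standard_normal(256)             # fresh MC batch -> noisy target
    target = np.maximum(p + z, 0.0).mean()   # unbiased estimate of exact(p)
    approx = np.maximum(p + w, 0.0).mean()   # LRV-style approximation at p
    # gradient of the squared error (approx - target)^2 with respect to w
    grad = 2.0 * (approx - target) * (p + w > 0.0) / N
    w -= lr * grad

# uniform (sup-norm) error over a grid of parameters
ps = np.linspace(-1.0, 1.0, 41)
lrv = np.array([np.maximum(p + w, 0.0).mean() for p in ps])
sup_err = np.max(np.abs(lrv - exact(ps)))
```

After training, the learned points $w_i$ are frozen, so evaluating the approximation for a new parameter $p$ costs only one $N$-term average, while the SGD fitting over random parameters pushes the error down uniformly over the parameter range rather than at a single fixed $p$.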