Paper Title

Fully reversible neural networks for large-scale 3D seismic horizon tracking

Paper Authors

Bas Peters, Eldad Haber

Paper Abstract

Tracking a horizon in seismic images or 3D volumes is an integral part of seismic interpretation. The last few decades saw progress in using neural networks for this task, starting from shallow networks for 1D traces and moving to deeper convolutional neural networks for large 2D images. Because geological structures are intrinsically 3D, we hope to see improved horizon tracking by training networks on 3D seismic data cubes. While there are some 3D convolutional neural networks for various seismic interpretation tasks, they are restricted to shallow networks or relatively small 3D inputs because of memory limitations: the memory required for the network states and weights increases with network depth. We present a fully reversible network for horizon tracking whose memory requirement is independent of network depth. To tackle memory issues regarding the network weights, we use layers that train directly in a factorized form. As a result, we can maintain a large number of network channels while keeping the number of convolutional kernels low. We use the saved memory to increase the input size of the data by an order of magnitude, so that the network can better learn from large structures in the data. A field data example verifies that the proposed network structure is suitable for seismic horizon tracking.
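To make the memory argument concrete, the sketch below shows a generic additive-coupling reversible block, a simplification for illustration and not necessarily the authors' exact layer design. Because the block's inputs can be recomputed exactly from its outputs, intermediate states need not be stored for backpropagation, so activation memory stays constant as depth grows. The `conv_like` function and the scalar weights here are hypothetical placeholders standing in for the network's convolutional layers.

```python
import numpy as np

def conv_like(x, w):
    # Placeholder for a convolutional layer: any map from a half-state
    # to a half-state of the same shape works in the coupling below.
    return np.tanh(w * x)

def forward_block(x1, x2, w1, w2):
    # Additive coupling (RevNet-style): the outputs fully determine the inputs.
    y1 = x1 + conv_like(x2, w1)
    y2 = x2 + conv_like(y1, w2)
    return y1, y2

def inverse_block(y1, y2, w1, w2):
    # Exactly recover the inputs from the outputs -- no cached activations.
    x2 = y2 - conv_like(y1, w2)
    x1 = y1 - conv_like(x2, w1)
    return x1, x2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x1, x2 = rng.normal(size=8), rng.normal(size=8)
    w1, w2 = 0.5, -0.3
    y1, y2 = forward_block(x1, x2, w1, w2)
    r1, r2 = inverse_block(y1, y2, w1, w2)
    print(np.allclose(x1, r1) and np.allclose(x2, r2))  # True
```

In a reversible training loop, the backward pass would call something like `inverse_block` to regenerate each layer's activations on the fly instead of caching them, which is what removes the dependence of memory on network depth.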
