Applying Conditional Generative Adversarial Networks for Seismic Data Reconstruction
Abstract
Due to physical or financial constraints, seismic datasets are often undersampled. Additionally, gaps in coverage, noisy traces, and irregular trace spacing are common problems in land and marine surveys that can hinder the geological interpretation of an area of interest. However, dense and regularly sampled data are becoming increasingly important in seismic processing. In recent years, there has also been rising interest in Convolutional Neural Networks (CNNs) in research involving seismic images. A CNN can be regarded as a sequence of convolutions (filters) that are applied to the input images and whose weights are learned by the network during training.
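As a rough illustration of this view of a CNN, the sketch below (in PyTorch, with hypothetical layer sizes that are not taken from this work) stacks a few 2D convolutions whose filter weights are the parameters learned during training, applied to a post-stack section treated as a single-channel image.

import torch
import torch.nn as nn

# A CNN as a stack of convolutions (filters) whose weights are learned
# from data. The number of layers and filters here is illustrative only.
cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 32 learned 3x3 filters
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),   # back to a single channel
)

# A post-stack seismic section as a single-channel image:
# (batch, channels, time samples, traces).
section = torch.randn(1, 1, 256, 128)
output = cnn(section)   # same spatial size thanks to padding=1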
In this work, we assess the performance of Conditional Generative Adversarial Networks (cGANs) for the data reconstruction problem in post-stack seismic datasets. In addition to increasing spatial density, post-stack interpolation can also be used to reconstruct entire sections by interpolating between neighboring traces, reducing field costs. Generative Adversarial Networks (GANs) have been widely employed in the computer vision community over the last few years. They are composed of two networks: a generator (G) that outputs synthesized images, and a discriminator (D) that determines whether an input image is synthesized or real. Both networks are trained in an adversarial scheme: while G learns to produce realistic images that fool D, D learns to correctly discriminate between synthesized and real images.
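The abstract does not state the exact loss used; as a point of reference, a common formulation of the conditional GAN objective that G minimizes and D maximizes is

\min_G \max_D \; V(D,G) = \mathbb{E}_{x,y}\big[\log D(x,y)\big] + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x,z))\big)\big],

where x is the conditioning input (e.g., the seismic section with missing or low-resolution traces), y is the corresponding real section, and z is a noise input to the generator.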
As applications, we consider the reconstruction of missing traces and resolution improvement for the public Netherlands F3 seismic dataset. For the missing-data application, we evaluated the correlations of reconstructed traces for gaps of 4, 8, 12, 16, 20 and 24 traces. The results show correlations of up to 0.76 for 4-trace gaps. For the resolution improvement application, we downscaled the seismic dataset to 50%, 33% and 25% of its original resolution in both the horizontal and vertical directions and validated the reconstructed results against the original data. To perform the comparisons, we used SSIM (structural similarity), MSE (mean squared error) and the LBP (local binary patterns) texture descriptor. The results show that cGANs outperform cubic interpolation by up to 76% and that the texture descriptor better captures image similarities, producing results more consistent with visual perception.
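As an illustration of how such comparisons can be computed, the following is a minimal sketch using scikit-image; the exact LBP configuration and comparison scheme used in this work are not given in the abstract, so the parameters and helper functions below are assumptions.

import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error
from skimage.feature import local_binary_pattern

def compare_sections(original, reconstructed, P=8, R=1.0):
    """Compare two 2D sections (time samples x traces).

    Hypothetical helper, not from the paper: returns SSIM, MSE and the
    correlation between LBP texture histograms.
    """
    data_range = original.max() - original.min()
    ssim = structural_similarity(original, reconstructed, data_range=data_range)
    mse = mean_squared_error(original, reconstructed)

    # LBP texture descriptor: compare normalized histograms of LBP codes.
    n_bins = P + 2  # number of codes for the 'uniform' LBP variant
    lbp_orig = local_binary_pattern(original, P, R, method="uniform")
    lbp_rec = local_binary_pattern(reconstructed, P, R, method="uniform")
    h_orig, _ = np.histogram(lbp_orig, bins=n_bins, range=(0, n_bins), density=True)
    h_rec, _ = np.histogram(lbp_rec, bins=n_bins, range=(0, n_bins), density=True)
    lbp_corr = np.corrcoef(h_orig, h_rec)[0, 1]

    return ssim, mse, lbp_corr

def gap_trace_correlation(original_gap, reconstructed_gap):
    """Mean trace-wise correlation over a reconstructed gap (columns = traces)."""
    corrs = [np.corrcoef(original_gap[:, i], reconstructed_gap[:, i])[0, 1]
             for i in range(original_gap.shape[1])]
    return float(np.mean(corrs))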
AAPG Datapages/Search and Discovery Article #90350 © 2019 AAPG Annual Convention and Exhibition, San Antonio, Texas, May 19-22, 2019