Welcome presentation
02:00 PM
02:10 PM
02:29 PM


Chairs: Rached Abdelkhalak and Saber Feki

02:30 PM

Traditionally, reservoir simulation has benefited from increased resolution and the incorporation of more physics, hence the evolution from million-cell to billion-cell and now trillion-cell reservoir simulators. Instead of that traditional route, this talk will discuss the future of reservoir simulation in the context of Machine Learning (ML). With examples of incorporating ML into flash calculations, ML-based well placement, and intelligent adaptive mesh refinement, the talk will demonstrate where ML can have an impact on reservoir simulation while maintaining fidelity of results. The replacement of parts of the simulator, or of entire simulators, by surrogate or proxy models that are learned from physics and run at a fraction of the computational cost could be in our near future.
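As a flavour of the surrogate idea described above, the sketch below trains a small neural network to emulate a toy flash calculation; the stand-in physics function, the sampling ranges, and the network size are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of a surrogate ("proxy") model for a flash calculation.
# The physics below is a stand-in toy function, not a real equation of state.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def toy_flash(p, t, z):
    """Placeholder for an expensive flash calculation: returns a vapor fraction."""
    return 1.0 / (1.0 + np.exp(-(0.02 * t - 0.01 * p + 2.0 * z - 3.0)))

# Sample the input space the way a simulator would query it.
X = rng.uniform([100.0, 300.0, 0.0], [500.0, 450.0, 1.0], size=(20000, 3))
y = toy_flash(X[:, 0], X[:, 1], X[:, 2])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
proxy.fit(X_train, y_train)

# The trained proxy replaces the expensive call inside the simulator loop.
print("R^2 on held-out samples:", proxy.score(X_test, y_test))
```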

Suha Kayum

Saudi Aramco

02:50 PM

The growing volume of digital information on reservoir monitoring and development opens new opportunities for more efficient exploitation and decision making. The lecture will demonstrate the use of machine learning in a number of typical reservoir modelling workflow tasks: identifying geological features from seismic and outcrop imagery interpretation under uncertainty; describing complex geological heterogeneity with respect to uncertainty; forecasting reservoir dynamics and updating models based on production data analytics; and making optimal decisions on the development of natural resources based on forecasts under conditions of uncertainty.

Vasily Demyanov

Heriot-Watt University

03:10 PM

In this talk we will cover in detail the software components of accelerated machine learning and deep learning that are useful for geoscience workflows. We will cover the NVIDIA tools for data pre-processing such as the NVIDIA DALI API, the GPU-accelerated frameworks for training and multi-GPU training, and the tools we use for inference such as the NVIDIA Triton Inference Server. We will also describe our contributions to some useful machine learning paradigms used in geoscience, such as our tools for federated learning, end-to-end training, and transfer learning. We will finish the talk with some examples of implementations of ML-based seismic interpretation.

Issam Said

NVIDIA

03:30 PM

Physics-informed neural networks (PINNs) have received considerable attention in various science and engineering disciplines. This technology constrains neural networks to honor the governing laws of physics, often described by partial differential equations (PDEs). Thanks to inherently accurate and efficient automatic differentiation (AD), differential operators can be evaluated at random points in the computational domain without any need for temporal or spatial discretization. This feature gives PINNs an advantage over numerical methods, which often exhibit gridding artifacts and discretization errors. The performance of PINNs has been demonstrated in various CFD applications, including inverse problems. A vital unsettled question concerns the applicability and performance of PINNs in solving forward and inverse problems for multiphase fluid flow in porous media, and whether this technology has the potential to replace traditional computational methods such as finite difference and finite element methods. In this talk, we address this question by reviewing the favorable features and capabilities of PINNs in solving general PDEs, and we highlight some of their major limitations that must be overcome before they can be applied to model fluid flow problems in porous media.
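For readers unfamiliar with the mechanics, here is a minimal, generic PINN sketch in PyTorch for a 1D model problem (u''(x) = f(x) with zero boundary conditions); the equation, collocation strategy, and network size are illustrative choices and not the porous-media formulation discussed in the talk.

```python
# Minimal PINN sketch: fit u(x) so that u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
# The PDE residual is evaluated at random collocation points via autograd,
# without any spatial discretization.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x):                      # chosen so the exact solution is u = sin(pi x)
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)        # random collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde_loss = ((d2u - f(x)) ** 2).mean()
    xb = torch.tensor([[0.0], [1.0]])
    bc_loss = (net(xb) ** 2).mean()                    # u(0) = u(1) = 0
    loss = pde_loss + bc_loss
    opt.zero_grad(); loss.backward(); opt.step()

print("final residual loss:", float(loss))
```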

03:50 PM
04:29 PM


Chairs: Weichang Li and Xiangliang Zhang

04:30 PM

Robust estimation of rock properties, such as porosity, density, etc., from geophysical data, i.e., seismic and well logs, is essential in subsurface modeling and reservoir engineering workflows. We present a semi-supervised learning workflow for static reservoir property estimation from 3D seismic and the sparse wells that are available in a given study area. The method consists of two steps: (1) unsupervised feature engineering on the 3D seismic and (2) supervised integration of seismic with well logs, each of which is implemented using convolutional neural networks (CNNs). Specifically, the first CNN aims at understanding the 3D seismic data in an unsupervised way and extracting the regional features present in the study area, while the second CNN aims at constructing the optimal non-linear mapping between the wells and the seismic patterns at the well locations. The two components are connected by embedding the first CNN into the second CNN, which enforces the use of regional seismic features while building the seismic-well mapping relationship and thus helps significantly reduce the risk of overfitting due to limited wells as well as improve the quality of machine prediction. Moreover, the proposed workflow allows incorporating any additional information (e.g., structure) as constraints, which is expected to further improve the machine learning prediction, particularly when wells are limited. The proposed workflow is applied to multiple datasets for performance evaluation. The good match between the machine prediction and the well logs verifies the capability of the proposed workflow to provide reliable seismic and well integration and deliver reliable reservoir property models. It provides a nearly one-click solution to obtain the 3D rock property distribution from seismic and well data in a study area.
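A schematic of how the two steps can be wired together is sketched below: a (pretrained) unsupervised feature encoder is embedded, frozen, inside the supervised property-mapping network. The autoencoder-style encoder, layer sizes, and patch dimensions are assumptions for illustration, not the authors' exact architecture.

```python
# Sketch: embed a (pretrained, frozen) unsupervised seismic feature encoder
# inside a second network that maps seismic patches to a well-log property.
import torch
import torch.nn as nn

class SeismicEncoder(nn.Module):            # step 1: unsupervised feature CNN
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, 32) regional features

class PropertyMapper(nn.Module):            # step 2: supervised seismic-to-log CNN
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():  # keep the learned regional features fixed
            p.requires_grad = False
        self.head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x):
        return self.head(self.encoder(x))

encoder = SeismicEncoder()                   # would be pretrained on the 3D volume
model = PropertyMapper(encoder)
patch = torch.randn(4, 1, 16, 16, 16)        # seismic patches at well locations
print(model(patch).shape)                    # torch.Size([4, 1]) predicted property
```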

Haibin Di

Schlumberger

04:50 PM

Modern AI is revolutionizing almost every industry. Deep Learning techniques enabled breakthroughs in Computer Vision, and we now see Intelligent Video Analytics (IVA) applications reaching a plateau of productivity. Recently, the ability to train large Deep Learning models in Natural Language Processing (NLP) opened the way to new disruptive results. This exciting innovation ecosystem is pushing global computing capacity to the limit. NVIDIA is the major computing company supporting most of these advances. The NVIDIA platform for AI and HPC is a full-stack approach where hardware and software evolve together to meet the performance requirements of the most demanding Digital Science and Enterprise solutions. In this presentation I'll give an overview of key NVIDIA software and hardware components inside Deep Learning frameworks (TensorFlow & PyTorch) and NVIDIA application platforms: RAPIDS (Machine Learning), Metropolis (IVA), Jarvis (Conversational AI), Merlin (Recommender Systems), Morpheus (Cybersecurity), etc. I'll show the benefits of our platform in real-world applications in Geoscience.

05:10 PM

Open Seismic is an open-source sandbox environment for developers and geoscientists in oil & gas to perform deep learning inference on 2D/3D seismic data. It helps geoscientists quickly evaluate deep learning models, performing inference on seismic data with optimizations from the OpenVINO™ Toolkit. Built using Docker technology, Open Seismic is essentially OpenVINO™ containerized and can run on any Linux platform. It includes reference models for common tasks like Fault Detection, Salt Identification and Facies Classification, and it can ingest seismic data from various sources such as on-prem storage, the cloud, or the OSDU data platform. Open Seismic is released under the Apache License 2.0. https://openseismic.readthedocs.io/

05:30 PM

The world is faced today with increasingly large and challenging scientific, engineering, economic and societal problems. Meanwhile, supercomputers are evolving at a rapid pace and adapting to model-driven as well as data-driven workloads. With the convergence of HPC and AI and the unprecedented growth in its applications, the KAUST Supercomputing Core Laboratory is striving to maintain state-of-the-art facilities to adequately support the research needs of KAUST faculty, students, researchers and partners. After an overview of the KAUST HPC and AI infrastructure, I will highlight the research support, training, collaboration and other services provided through the technical expertise of our core laboratories' scientists.

Saber Feki

KAUST

05:50 PM
06:30 PM
Session III Panel discussion: "The future of ML in Geosciences and Engineering"
02:00 PM


In what parts of Geoscience and Engineering will ML make the biggest impact?
Is ML a fad or here to stay?
How can geoscience and engineering departments at Universities cope with the emerging ML trend?
Would the job market prefer an ML geoscience (engineering) degree (B.S., M.S., or even Ph.D.)?
What is next for ML in illuminating the Earth?

Moderator: Claire Birnie (KAUST)



Fatai Anifowose

Saudi Aramco

Steve Freeman

Schlumberger

Satyam Priyadarshy

Halliburton

Ashley Russel

Equinor

02:59 PM


Chair: Hussein Hoteit

03:00 PM

Solving for the frequency-domain scattered wavefield via a physics-informed neural network (PINN) shows great potential in seismic modeling for its efficiency and flexibility. However, its application to high-frequency wavefield representation still has a long way to go. In this talk, I will discuss the challenges, introduce some of our recent progress toward solving this problem, and outline future research.

03:10 PM

Big data analytics and large-scale simulations have followed largely independent paths to the high-performance computing frontier, but important opportunities now arise that can be addressed by combining the strengths of each. As a prominent big data application, geospatial statistics is increasingly performance-bound. We present Exascale GeoStatistics (ExaGeoStat) software, a high-performance library implemented on a wide variety of contemporary hybrid distributed-shared supercomputers whose primary target is climate and environmental prediction applications. Such software is destined to play an important role at the intersection of big data and extreme simulation by allowing applications with prohibitively large memory footprints to be deployed at scales worthy of the data on modern architectures by exploiting recent algorithmic developments in computational linear algebra.

In contrast to simulation based on partial differential equations derived from first-principles modeling, ExaGeoStat employs a statistical model based on the evaluation of the Gaussian log-likelihood function, which operates on a large dense covariance matrix. A relatively small ensemble of expensive simulations can be used to parameterize a statistical model from which inexpensive emulations can be drawn after a parameter fitting process. For the dense covariance matrix operations of geospatial statistics to keep up with the growing scale of data sets from the sparse Jacobian operations of PDE simulations, data sparsity intrinsic in the physics must be identified and exploited. Parameterized by the Matérn covariance function, the covariance matrix is symmetric and positive definite. The computational tasks involved in the evaluation of the Gaussian log-likelihood function become daunting as the number n of geographical locations grows, since O(n²) storage and O(n³) operations are required. While ExaGeoStat's distributed capability extends traditional "exact" linear algebra approaches, the library also supports several approximate techniques that reduce the complexity of the maximum likelihood operation while respecting user-specified accuracy. For example, ExaGeoStat supports the Tile Low-Rank (TLR) approximation technique, which exploits the data sparsity of the dense covariance matrix by compressing the off-diagonal tiles up to a user-defined accuracy threshold. Because many environmental characteristics show spatial continuity, i.e., data at two nearby locations are on average more similar than data at two widely spaced locations, other approximations become valid and are provided by ExaGeoStat, such as diagonal super-tile and mixed-precision approximation methods, whereby the less significant correlations that comprise the vast majority of entries in the covariance matrix are stored in lower precisions than the defaults used for tightly coupled degrees of freedom.
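To make the cost argument concrete, the sketch below evaluates the dense Gaussian log-likelihood for n synthetic locations with an exponential (Matérn, ν = 1/2) covariance; this plain NumPy/SciPy version is exactly the O(n²)-storage, O(n³)-factorization computation that ExaGeoStat accelerates and approximates, and the kernel choice and data here are placeholders.

```python
# Dense Gaussian log-likelihood for n spatial locations: the operation whose
# O(n^2) storage and O(n^3) Cholesky cost motivate TLR and mixed-precision
# approximations. Uses the Matern nu = 1/2 (exponential) kernel for simplicity.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 2000
locs = rng.uniform(size=(n, 2))                 # synthetic geographic locations
z = rng.standard_normal(n)                      # synthetic measurements

def neg_loglik(params):
    variance, length = params
    C = variance * np.exp(-cdist(locs, locs) / length)   # dense covariance matrix
    C[np.diag_indices_from(C)] += 1e-6                   # small nugget for stability
    cf = cho_factor(C, lower=True)                       # O(n^3) factorization
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    quad = z @ cho_solve(cf, z)
    return 0.5 * (logdet + quad + n * np.log(2.0 * np.pi))

print(neg_loglik((1.0, 0.1)))   # one evaluation inside a maximum-likelihood search
```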

03:20 PM

Following the rapid growth of unconventional resources, petroleum engineers have been focusing on the use of various tools to predict the performances and operational lives of unconventional reservoirs. Several studies have used machine learning (ML) algorithms to improve the productivity of reservoir fields. However, owing to a lack of stability and other limitations of ML in regard to long-term forecasts, including the occurrence of unphysical results, reservoir engineers often do not trust ML.

In this work, we present a new workflow for automating a decline curve analysis (DCA) calculation in a more robust way, and for predicting the production from new wells using a state-of-the-art Bayesian neural ordinary differential equation (ODE). This provides a powerful framework for modeling physical simulations, even when the ODEs governing the system are not explicitly defined. This study utilizes publicly available databases from the Bakken Shale Formation to develop a novel ML predictive modeling method for connecting well-completion-related and geological variables to the parameters of a stretched exponential decline model (SEDM). These SEDM-estimated parameters are integrated with a Bayesian neural ODE framework based on Bayesian inference. A "No-U-Turn" Markov chain Monte Carlo (MCMC) sampler (denoted "NUTS") is used to rapidly predict the decline curves for new or existing wells, without the need for costly reservoir simulators. This methodology is found to be accurate for predicting the decline rates of new wells. Depending on the data obtained from existing wells, this method can also be used to predict the ultimate recovery from a new well.

To the best of our knowledge, this is the first study to simultaneously employ a Bayesian neural ODE and an ML algorithm to predict and analyze functional capabilities based on decline curve parameters.
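For reference, the stretched exponential decline model (SEDM) mentioned above has a simple closed form, q(t) = q_i exp(-(t/τ)^n); the sketch below evaluates it with placeholder parameter values, not Bakken estimates.

```python
# Stretched exponential decline model (SEDM): q(t) = q_i * exp(-(t / tau) ** n).
# These are the parameters the ML model maps completion/geology variables to;
# the values used here are placeholders.
import numpy as np

def sedm_rate(t, q_i, tau, n):
    """Production rate at time t (t and tau in the same units)."""
    return q_i * np.exp(-((t / tau) ** n))

t = np.linspace(1, 3650, 120)            # ten years of time samples (days)
q = sedm_rate(t, q_i=500.0, tau=300.0, n=0.5)
eur = np.trapz(q, t)                     # crude estimate of cumulative production
print(f"rate after 1 year: {sedm_rate(365, 500.0, 300.0, 0.5):.1f}, EUR ~ {eur:.0f}")
```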

Amine Tadjer

University of Stavanger

03:30 PM

Petrographic thin section analysis is a critical part of subsurface reservoir characterization and is widely used for initial estimation of porosity and pore types. Compared to micro-CT and SEM images, thin sections are relatively easy and cheap to obtain. In this study we present an optimized machine-learning-based thin section image analysis workflow that offers an inexpensive and fast approach to pore network characterization, compared to the more expensive and less accessible micro-CT or FIB-SEM techniques. We applied this methodology to carbonate rock samples from the Upper Jurassic Jubayla Formation, depositionally equivalent to the lower part of the super-giant Arab-D reservoirs in Saudi Arabia. The first step is pre-processing and segmentation of color (RGB) thin section images into a binary image representing the pore and solid phases. We tested three machine learning methods for segmentation: 1) K-Means clustering, 2) Random Forest and 3) Support Vector Machine (SVM). We then applied a numerical reconstruction method to obtain a 3D pore volume based on the 2D thin section images. The 2D segmented images were used as training images to generate the 3D pore structure by multiple point statistics (MPS), which is one of the most effective ways to reconstruct 3D porous media based on 2D images. We then extracted a Pore Network Model (PNM) from the reconstructed 3D pore volume using a medial axis algorithm. We find that the choice of image segmentation method has a significant impact on the final digital rock analysis results. 2D-to-3D reconstruction by MPS effectively reproduced the connectivity of the macropores in the studied rock sample. Although pore network modelling simplified the porous structure, the topological features were maintained. The pore size distribution and permeability calculated from the extracted pore network model matched well with the laboratory-measured data from the Upper Jurassic Jubayla Formation carbonate rocks. The digital image analysis methodology thus applies machine learning for image processing and classification of thin section images for reliable pore network characterization.
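A minimal sketch of the first segmentation option (K-Means clustering on RGB pixel values) is given below; the synthetic image stands in for a real thin-section photomicrograph, and the darker-cluster-equals-pore heuristic is an assumption.

```python
# Sketch of K-Means pore/solid segmentation of an RGB thin-section image.
# A synthetic image stands in for a real photomicrograph; with real data,
# replace `img` with e.g. skimage.io.imread("thin_section.png") / 255.0.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.uniform(size=(256, 256, 3))            # placeholder RGB image in [0, 1]

pixels = img.reshape(-1, 3)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

# Decide which cluster is "pore" (here: the darker one; this heuristic is an assumption).
brightness = np.array([pixels[labels == k].mean() for k in (0, 1)])
pore_label = int(np.argmin(brightness))
binary = (labels == pore_label).reshape(img.shape[:2])   # pore/solid binary map

print("estimated 2D porosity:", binary.mean())
```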

Xin Liu

KAUST

03:40 PM

Digital Rock Physics relies on the availability of high-resolution, large-size 3D digital rock images. In practice, there is always a trade-off between the size and resolution of the acquired images. Moreover, it is time-consuming to acquire high-quality digital rock images using imaging techniques like X-ray micro-Computed Tomography (micro-CT) and Scanning Electron Microscope (SEM) imaging. In this paper, we propose an ML-aided 3D reconstruction method that allows us to reduce the sampling rate along the axial direction during image acquisition. Exploiting the linearity of the latent space learned by a Progressive Growing Generative Adversarial Network (PG-GAN), we reconstruct the missing parts between slices scanned at large constant intervals via linear interpolation in that latent space. We apply our method to reconstructing the 3D image of an Estaillades carbonate rock sample. Both the reconstructed image and the extracted pore network are visually indistinguishable from the ground truth. Overall, our method saves imaging time and cost significantly, enables efficient image editing in the PG-GAN's linear latent space as well as the use of SEM images in 3D reconstruction for enhanced image quality, offers highly efficient compression of the image data, and enlarges the digital rock repository for ML research.
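The core reconstruction step, linear interpolation between latent codes of two scanned slices, can be sketched as follows; the generator here is a stand-in module rather than the trained PG-GAN, and the latent dimension, slice size, and number of missing slices are illustrative.

```python
# Sketch of slice in-painting by linear interpolation in a GAN latent space.
# `generator` is a placeholder network; in practice it would be a trained PG-GAN,
# and z_a, z_b the latent codes recovered for two scanned slices.
import torch

torch.manual_seed(0)
latent_dim = 128
generator = torch.nn.Sequential(            # stand-in for the trained generator
    torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 64 * 64), torch.nn.Tanh(),
)

z_a = torch.randn(latent_dim)               # code of slice scanned at depth z
z_b = torch.randn(latent_dim)               # code of slice scanned at depth z + dz

n_missing = 7                               # slices to synthesize between the scans
alphas = torch.linspace(0.0, 1.0, n_missing + 2)[1:-1]
interpolated = torch.stack(
    [generator((1 - a) * z_a + a * z_b).reshape(64, 64) for a in alphas]
)
print(interpolated.shape)                   # (7, 64, 64) reconstructed slices
```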

Nan You

National University of Singapore

03:50 PM

Time-lapse or 4D seismic survey is a crucial monitoring tool for CO2 geological sequestration. Conventional time-lapse interpretation provides detailed characterization of CO2 distribution in the storage unit. However, manual interpretation is labour-intensive and often inconsistent throughout the long monitoring history, due to the inevitable changes in seismic acquisition and processing technology and interpreter’s subjectivity. We propose a neural network (NN)-based interpretation method that translates baseline and monitoring seismic images to the probability of CO2 presence. We use a simplified 3D U-Net, whose training, validation and testing are all based on the Sleipner CO2 storage project. The limited labels for training are derived from the interpreted CO2 plume outlines within the internal sandstone layers for 2010. Then we apply the trained NN on different time-lapse seismic datasets from 1999 to 2010. The results suggest that our NN-based CO2 interpretation has the following advantages: (1) high interpretation efficiency by automatic end-to-end mapping; (2) robustness against the processing-induced mismatch between the baseline and time-lapse inputs, relaxing the baseline reprocessing demands when compared to newly acquired or reprocessed time-lapse datasets; and (3) inherent interpretation consistency throughout multiple vintage datasets. Testing results with crafted time-lapse images unveil that the NN takes both amplitude difference and structural similarity into account for CO2 interpretation. We also compare 2D and 3D U-Nets under the scenario of sparse 2D labels for training. The results suggest that the 3D U-Net provides more continuous interpretation at the cost of larger computational resources for training and application.
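A schematic of the input/output arrangement described above is sketched below: baseline and monitor volumes stacked as two channels and a small 3D encoder-decoder (a stand-in for the simplified 3D U-Net, without skip connections) producing a per-voxel CO2 probability; all sizes are illustrative.

```python
# Sketch of the time-lapse set-up: baseline and monitor seismic volumes stacked
# as two input channels, a small 3D encoder-decoder producing a per-voxel
# probability of CO2 presence.
import torch
import torch.nn as nn

class TinySegNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1),
        )
    def forward(self, x):
        return torch.sigmoid(self.up(self.down(x)))

net = TinySegNet3D()
baseline = torch.randn(1, 1, 32, 64, 64)     # baseline vintage
monitor = torch.randn(1, 1, 32, 64, 64)      # monitor vintage
prob_co2 = net(torch.cat([baseline, monitor], dim=1))
print(prob_co2.shape)                        # (1, 1, 32, 64, 64) probability volume
```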

Bei Li

National University of Singapore

04:00 PM
04:29 PM


Chairs: Ebru Bozdag and Daniel Peter

04:30 PM

Seismic structural interpretation and structural model building are important steps for us to interpret and understand the subsurface geologic structures. We facilitate these steps by using deep learning methods to 1) automatically identify faults, horizons, and geobodies from seismic images; 2) automatically and efficiently build implicit structural models from interpreted sparse and incomplete structural data (e.g., points, segments, and patches); 3) build rock property models by integrating seismic structures and well-log properties.

Xinming Wu

University of Science and Technology of China

04:50 PM

It is common practice in seismic data processing to select parameters on a regular grid. For example, first breaks may be picked every kilometer, or denoising parameters may be selected every one hundred shots. This is labour intensive, usually requiring several months of picking and parameter tuning for each seismic dataset. It is also wasteful, as the parameters often do not change substantially between the chosen locations, and may lead to poor results as sudden changes in the best parameter value might be missed. Active learning enables a more efficient use of human effort by instead directing attention to the samples where the appropriate parameter values are most uncertain. It is a data-driven approach that does not require pre-training and is designed to complement human expertise, avoiding difficulties that hamper the use of other machine learning techniques in production. I will discuss the forms of active learning that are relevant for seismic processing and present an application of it on a real dataset.
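A generic uncertainty-sampling loop of the kind described might look as follows; the toy dataset, random-forest model, batch size of five, and least-confidence criterion are assumptions for illustration, not the talk's exact procedure.

```python
# Generic active-learning (uncertainty sampling) loop: label only the samples
# where the current model is least certain, instead of picking on a regular grid.
# Toy data and a random-forest classifier stand in for the real processing task.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
labeled = list(range(20))                      # a small initial set of expert picks
pool = [i for i in range(len(X)) if i not in labeled]

model = RandomForestClassifier(random_state=0)
for _ in range(10):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)      # least-confident sampling
    ask = np.argsort(uncertainty)[-5:]         # send 5 samples to the human expert
    newly_labeled = [pool[i] for i in ask]
    labeled += newly_labeled                   # expert supplies labels for these
    pool = [i for i in pool if i not in newly_labeled]

print("labels used:", len(labeled), "accuracy on rest:", model.score(X[pool], y[pool]))
```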

Alan Richardson

Ausar Geophysical

05:10 PM

The analysis of 3-D seismic reflection data (and their corresponding seismic attributes) is an essential tool for hydrocarbon exploration projects. In this talk we review some work done when applying Machine learning to 3-D seismic reflection data to estimate some petrophysical properties given by the well-logs. The results correspond to an oil field in Colombia called Tenerife located in the Middle Magdalena Valley and we go from 2-D seismic sections to 3-D geo-cubes of petrophysical properties. Using some of these properties (such as the volume of clay) we obtain facies. We are able to identify paleochannels that are seen in the seismic data, its seismic attributes and in the resulting 3-D petrophysical estimations.

Ursula Iturraran-Viveros

National Autonomous University of Mexico (UNAM)

05:30 PM

As we witness a surge of interest in exploring the potential of machine learning in scientific and engineering applications where well-established first-principles models often exist, curious questions regarding the interplay between machine learning and physics models become important, both at the conceptual level and for practical purposes: what are the connections and differences, how do machine learning and physics interact in a collated framework, and what are the comparative advantages and limitations of each method? In this talk I will present several electromagnetic (EM) inversion cases where conventional inversion, machine learning based regression, and machine learning with physics constraints are applied and compared. In the case of machine learning with physics constraints, I will show how the prior and posterior density can be effectively shaped by alternating between minimizing the data and parameter space losses, respectively, to improve inversion resolution and accuracy.

Weichang Li

Aramco - Houston Research Center

05:50 PM
06:30 PM
Session VI: Lightning Talks "Physics and pixels"
01:59 PM


Chair: Tariq Alkhalifah

02:00 PM

Interpretation of fractures in raw outcrop maps is a tedious and time-consuming task. A few semi-automatic or automatic interpretation methods based on image processing are available; however, they are usually sensitive to the contrast of the image, which, in turn, causes under- or over-interpretation of the fracture geometry. A successful interpretation of fractures from a raw outcrop image requires two stages: (1) conversion of a multi-bit-per-pixel raw outcrop image into a binary map that preserves fracture geometry and connectivity, and (2) replacement of the binary fracture images with line segments or polylines. These two stages are fracture recognition and fracture detection, respectively. We apply the U-net architecture to recognize fractures in a raw outcrop map. When 200 training epochs are applied to our images, the training accuracy reaches 0.94, while the mean square error decreases to 0.02. The implementation of U-net yields good results for fracture recognition. We also propose a pixel-based fracture detection algorithm that can automatically interpret the fractures in the recognized binary map as line segments or polylines. By combining fracture recognition and detection, we can automatically interpret fractures in a complex raw outcrop map.

Weiwei Zhu

Tsinghua University

02:10 PM

Machine learning is being presented as a new solution for a wide range of geoscience problems. It has primarily been used for 3D seismic data interpretation, processing, seismic facies analysis and well-log data correlation. The rapid development of technology, with open-source artificial intelligence libraries and the accessibility of affordable graphics processing units (GPUs), makes the application of machine learning in geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows of subsurface datasets is still ambiguous. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Outcrop images and modelled datasets have been used to train and test the machine learning models. The dataset comprises three classes of fold types (buckle, chevron and conjugate). These image datasets are used to investigate a wide range of shallow, deep and transfer machine learning models: a feedforward linear neural network model and two convolutional neural network models (a sequential model of 2D convolutional layers, and a residual-block model, i.e., ResNet with 9, 34, and 50 layers). Validation and testing datasets form a critical part of testing the models' performance accuracy. The ResNet model records the highest accuracy score of the machine learning models tested. Our CNN image classification analysis provides a framework for applying machine learning to increase structural interpretation efficiency and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.

Ramy Abdallah

University of Aberdeen

02:20 PM

Since 2018, deep learning inversion methods for seismic data have made great progress. Currently, deep learning inversion methods are mostly data-driven and based on supervised learning. They rely on labels, i.e., actual velocity models, which are difficult to obtain in real data acquisition. To solve this problem, the acoustic wave equation is introduced into SeisInvNet, our previously proposed inversion network, which can perform forward modeling of the network prediction results to obtain the corresponding seismic data. The predicted seismic data and the acquired real data can be used to obtain a new loss function, called the data loss, and the gradients of the network parameters can be computed, enabling the training of a DNN that does not depend on the actual velocity model. In this way, physical information is also introduced into the network to improve generalizability. The proposed network is trained by sampling 1200 sets of 2D models with 64×200 grids from the Overthrust model. To provide stable initial parameters for the network, we used a linear initial model as a bootstrap and gradually increased the weight of the data loss to finally achieve a good unsupervised learning inversion result. It is interesting to note that it is difficult to obtain accurate inversion results with a linear model as the initial model in traditional full-waveform inversion. We believe that it is very meaningful to use a large amount of data as a prior for the network to effectively mitigate the local minima problem in traditional inversion.

Senlin Yang

Shandong University

02:30 PM

Solving the wave equation numerically constitutes the majority of the computational cost for applications like seismic imaging and full waveform inversion. An alternative approach is to solve the frequency-domain Helmholtz equation, which offers a reduction in dimensionality as it can be solved per frequency. However, challenges with classical Helmholtz solvers, such as the need to invert a large stiffness matrix, can become computationally intractable for large and 3D models or high frequencies. Therefore, a new approach based on the physics-informed neural network paradigm has been proposed to solve the Helmholtz equation, but this method still needs further improvement. Consequently, in this abstract, we study different activation functions in order to improve the convergence properties of this solution. We compare activation functions that are regularly used in the literature, in addition to a newer variant of ReLU called the swish activation function, and we find that swish offers much better convergence properties than the other widely used activation functions.
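For reference, the swish variant compared in this study is simply x·sigmoid(βx) (with β = 1 it coincides with PyTorch's built-in SiLU); a minimal definition and drop-in use are sketched below, with the network shape chosen only for illustration.

```python
# Swish activation: swish(x) = x * sigmoid(beta * x). With beta = 1 this is the
# SiLU already provided by PyTorch; a trainable beta is shown for completeness.
import torch
import torch.nn as nn

class Swish(nn.Module):
    def __init__(self, beta=1.0, trainable=False):
        super().__init__()
        b = torch.tensor(float(beta))
        self.beta = nn.Parameter(b) if trainable else b
    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)

# Drop-in use inside a PINN-style fully connected network (shapes are illustrative):
net = nn.Sequential(nn.Linear(3, 64), Swish(), nn.Linear(64, 64), Swish(), nn.Linear(64, 2))
x = torch.randn(16, 3)            # e.g. (x, z, frequency)-style inputs
print(net(x).shape)               # two outputs, e.g. real and imaginary wavefield parts
```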

Ali Al Safwan

King Fahd University of Petroleum and Minerals

02:40 PM

The wavefield reconstruction inversion (WRI) method is a PDE-constrained optimization method that aims to mitigate the cycle-skipping issue in full-waveform inversion (FWI). WRI was originally proposed in the frequency domain, as the size of the wavefield at a certain frequency is the same as the model size. However, it is very expensive to reconstruct the wavefield for a large model, especially for 3D problems. In addition, conventional numerical methods cannot invert for a frequency-domain wavefield with irregular topography. A recently introduced framework called the physics-informed neural network (PINN) is used to predict PDE solutions by setting the physical equation as a cost function. PINN has shown its effectiveness in solving the Helmholtz wave equation, specifically for the scattered wavefield. By including the recorded data at the sensors' locations as a constraint, a PINN can predict a wavefield that simultaneously fits the recorded data and the Helmholtz wave equation with a given initial model. With the predicted wavefields, we can build another independent PINN aimed at inverting for the velocity. In this new PINN, we still use spatial coordinates as the input data, and use the predicted wavefields and the background homogeneous velocity as complementary variables to define the cost function. After a fully connected 8-layer deep neural network is trained, we are able to predict the velocity in the domain of interest. We demonstrate the validity of the proposed method on a layered model, and the results show that PINN can reconstruct the scattered wavefield and invert for a reasonable velocity model even with a single source and a single frequency.

Chao Song

Imperial College London

02:50 PM

The impact of human development on our planet's climate and environment is a key concern for many scientists and policy makers. The abundance of satellite imagery provides us with a unique opportunity to study the global impact of human activity. Machine learning is an extremely useful tool for this analysis, as it provides a means to automatically extract and process huge amounts of data. In this study, Google Earth Engine was used to detect deforested areas within the Amazon. An image processing workflow was developed, and a simple pixel-wise classifier was trained. This classifier was applied to a small area of the Amazon for the years 2015-2019. Active areas of deforestation are identified using this technique.

Tim Taylor

Independent

03:00 PM
03:29 PM


Chairs: Jan van de Mortel and Claire Birnie

03:30 PM

The popularity of Machine Learning (ML) algorithms, such as neural networks, in geophysics has increased hugely in recent years. This is due to access to faster computers, ready-to-use ML software and better machine learning approaches. The big question is: what will finally be the future of such ML algorithms? Will ML algorithms be able to derive high-resolution subsurface parameters directly from the raw data? Should we replace all imaging and inversion methodologies by a huge amount of forward modeling exercises and use the ML approach to find the answer? I personally don't think this will happen soon, and it is probably not smart either. Often we see that when we replace deterministic approaches by ML algorithms, we basically start from scratch again, throw away all developed methodologies and finally hope that the ML will outperform the deterministic version, either in quality or speed or both. Therefore, I think it is better to build as much as possible on deterministic methodologies and try to improve them with ML methods, so that they augment the current approach and fill in its limitations. For example, ML may not be good enough to replace full-blown 3D FWI, but it may provide a good initial model or improve its gradient, enabling better or faster convergence. It may not replace surface-related multiple prediction, but it can help fill in missing data or guide the adaptive subtraction, as shown by examples. The use of ML can become even more exciting if we combine it with new data acquisition approaches: e.g., acquire one part of the survey in high-resolution mode, such that it can serve as learning data for the rest of the survey. Thus, I believe that - in combination with good geophysical domain knowledge - ML will play an increasingly prominent role in filling in those gaps that current deterministic methods or physical models cannot.

Eric Verschuur

Delft University of Technology

03:50 PM

Seismic full-waveform inversion is a typical non-linear and ill-posed large-scale inverse problem. It is an important and widely used geophysical exploration method to obtain subsurface structures. Existing physics-driven computational methods for solving waveform inversion usually suffer from the cycle skipping and local minima issues, and not to mention that solving waveform inversion is computationally expensive. We recently developed several data-driven inversion techniques to reconstruct subsurface structures. Our data-driven inversion approaches are end-to-end frameworks that can generate high-quality subsurface structure images directly from the raw seismic waveform data. A series of numerical experiments are conducted on the synthetic seismic reflection data to evaluate the effectiveness of our methods. In this talk, I will discuss the pros and cons of physics-driven and data-driven inversion techniques. Particularly, I will compare the accuracy of the reconstruction as well as computational efficacy. Furthermore, I will also discuss the possibility of combining both types of methods with the hope of benefiting each other.

Youzuo Lin

Los Alamos National Laboratory

04:10 PM

The posterior probability distribution provides a comprehensive description of the solution in ill-posed inverse problems. Sampling from the posterior distribution in the context of seismic imaging is challenged by the high-dimensionality of the unknown and the expensive-to-evaluate forward operator. These challenges limit the applicability of Markov Chain sampling methods due to the costs associated with the forward operator. Moreover, explicitly choosing a prior distribution that captures the true heterogeneity exhibited by the Earth's subsurface further complicates casting seismic imaging into a Bayesian framework. To handle this situation and to assess uncertainty, we propose a data-driven variational inference approach based on conditional normalizing flows (NFs). The proposed scheme leverages existing data, which are in the form of low- and high-fidelity migrated image pairs, to train a conditional NF capable of characterizing the posterior distribution. After training, the NF can be used to sample from the posterior distribution associated with a previously unseen seismic survey, which is in some sense close, e.g., data from a neighboring survey area. In our numerical example, we obtain high-fidelity images from the Parihaka dataset and low-fidelity images are derived from these images through the process of demigration, followed by adding band-limited noise and migration. During inference, given shot records from a new neighboring seismic survey, we first compute the reverse-time migration image. Next, by feeding this low-fidelity migrated image to the NF we gain access to samples from the posterior distribution virtually for free. We use these samples to compute a high-fidelity image including a first assessment of the image's reliability.

This is joint work with Gabrio Rizzuti, Mathias Louboutin, Philipp A. Witte, and Felix J. Herrmann.

Ali Siahkoohi

Georgia Institute of Technology

04:30 PM

Neural network models, in simple terms, are nothing but mathematical functions of the inputs. Unlike functions represented by Fourier, wavelet or any other basis functions, the parameters (coefficients) of neural networks correspond to basis functions defined primarily by a stack of linear operations and activation functions. Thus, these parameters are evaluated through an optimization problem, and neural network models have proven to be good universal function approximators, granted the functions are continuous, like those describing wavefields, including our data. I will share examples of neural network functions that help us solve some of the outstanding challenges we face in waveform inversion. These include NN misfit functions that measure the distance between the observed and synthetic data and are trained in a way that allows us to avoid cycle skipping. They also include an NN wavefield functional solution to the wave equation that can also fit the data. The key feature of NN functions is their flexibility, as they are optimized to implement specific tasks, and in our case these tasks are directed to support waveform inversion.

04:50 PM
05:29 PM


Chairs: Damian San Roman Alerigi and Hussein Hoteit

05:30 PM

Numerical simulation in porous media relies on the discrete representation of equations that have been derived from conservation laws and constitutive relations. In many practical applications, the scope of these equations can be insufficient to realistically describe the dynamics of flow in complex porous media induced by sudden changes in boundary conditions and force terms (e.g., wells, aquifers). Nevertheless, the increasing affordability of collecting and storing large volumes of data is enabling possibilities to gain new insights and discover elusive physical relations missing from existing simulation models. In this presentation, we introduce a framework of combined physics-based and data-driven models, namely Physics-AI (or PhysAI) models, to reconstruct and predict the dynamics of fluid flow in diverse unconventional field scenarios. These models are designed to learn and identify spatiotemporal relationships from monitoring data as well as to provide explainable interpretations in the form of differential equations if needed. Hence, the proposed approach differs from other well-known data-driven approaches that strictly rely on black-box solutions. Capabilities of PhysAI models are evaluated for reliably predicting state data (e.g., pressure and saturation) as well as for forecasting multi-well production data on multiphase and coupled flow/geomechanics applications involving both synthetic and modest field data requirements. A stochastic optimization approach is coupled with the resulting PhysAI model for generating optimal in-fill, drawdown, and completion design recommendations in different unconventional fields.

Hector Klie

Rice University & DeepCast Co.

05:50 PM
06:00 PM

Petroleum Data Analytics (PDA) is the engineering application of Artificial Intelligence & Machine Learning to petroleum-engineering-related problem solving and decision-making. PDA will fully control the future of science and technology in the petroleum industry. It is highly important for the new generation of scientists and petroleum professionals to develop a scientific understanding of this technology. Similar to the application of this technology in other engineering disciplines, Petroleum Data Analytics addresses two major issues that determine the success or failure of this technology in our industry: (a) the differences between "engineering" and "non-engineering" problem solving and decision-making, and (b) how AI&ML is differentiated from traditional statistical analysis. Lack of success or mediocre outcomes of AI&ML in our industry have been quite common. To a large degree, this has to do with a superficial understanding of this technology by some petroleum engineering domain experts and a concentration on marketing schemes rather than science and technology.

Shahab Mohaghegh

West Virginia University & Intelligent Solutions Inc.

06:20 PM
06:30 PM
Session IX: "Applications on wavefields and model building"
01:59 PM


Chairs: Andrew Long and Tariq Alkhalifah

02:00 PM

In this talk, I will report some recent work in my group on applications of deep learning for seismic data denoising, interpolation, migration velocity analysis and velocity model building.

Jianwei Ma

Peking University

02:20 PM

Diego Rovetta*, Apostolos Kontakis*, Daniele Colombo**
*Delft Global Research Center, Aramco Overseas Company B.V.
**Geophysics Technology, EXPEC Advanced Research Center, Saudi Aramco

Oil and gas exploration in desert environments requires an accurate description of the near surface, which is typically characterized by a complex geology that strongly affects the quality of the seismic data. High-resolution details of the shallow subsurface can be obtained by analyzing the surface waves (SW) and their phase velocity variation with frequency (dispersion curves). A detailed near-surface velocity model can be obtained by inversion of the dispersion curves, or by their joint inversion with other geophysical measurements. However, the necessary step of manually extracting the dispersion curves from the seismic data can be a highly inefficient and cumbersome task for large seismic surveys. We recently proposed to use machine learning techniques to automate this extraction procedure, and we tested different supervised (neural networks) and unsupervised (clustering) algorithms after integrating them into a workflow specifically designed to extract SW dispersion curves from the frequency-phase velocity spectrum. In particular, DBSCAN (Density-Based Spatial Clustering of Applications with Noise) proved to be the best algorithm when it comes to balancing accuracy, robustness against noise, efficiency and automation. When tested on the SEAM Arid model synthetic dataset, this method extracts dispersion curves that match the theoretical ones with good accuracy, and once inverted for velocities they successfully recover the near-surface features with high resolution. We also applied this algorithm to a field dataset acquired in a desert environment, providing geology-consistent velocities through single-domain SW inversion and through joint inversion of first-break traveltimes and SW. Finally, we extended the picking workflow from the fundamental mode to higher-order modes. We believe that the integration of machine learning algorithms into the dispersion curve picking procedure makes it feasible to use SW information for a high-resolution characterization of the near surface in complex geology environments.
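A toy version of the clustering step is sketched below: DBSCAN applied to candidate picks in the frequency-phase-velocity plane; the synthetic spectrum maxima and the eps/min_samples values are placeholders that would need tuning on real data.

```python
# Toy sketch of dispersion-curve picking with DBSCAN: cluster the energy maxima
# of a frequency-phase-velocity spectrum and keep the dense cluster as one curve.
# The "picks" are synthetic; eps and min_samples are placeholder values.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
freqs = np.linspace(2.0, 30.0, 200)                       # Hz
v_trend = 300.0 + 900.0 / freqs                           # fake fundamental-mode trend
picks = np.column_stack([freqs, v_trend + rng.normal(0, 15, freqs.size)])
noise = np.column_stack([rng.uniform(2, 30, 40), rng.uniform(200, 1500, 40)])
points = np.vstack([picks, noise])                        # spectrum maxima + spurious picks

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(StandardScaler().fit_transform(points))
largest = max(set(labels) - {-1}, key=lambda k: (labels == k).sum())
curve = points[labels == largest]                         # densest cluster ~ dispersion curve
print("picked", len(curve), "points; noise points:", int((labels == -1).sum()))
```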

Diego Rovetta

Aramco Overseas

02:40 PM

Seismic traveltime tomography using transmission data is widely used to image the Earth's interior from global to local scales. In seismic imaging, it is used to obtain velocity models for subsequent depth-migration or full-waveform inversion. In addition, cross-hole tomography has been successfully applied for a variety of applications, including mineral exploration, reservoir monitoring, and CO2 injection and sequestration. Conventional tomography techniques suffer from a number of limitations, including the use of a smoothing regularizer that is agnostic to the physics of wave propagation. Here, we propose a novel tomography method to address these challenges using developments in the field of scientific machine learning. Using seismic traveltimes observed at seismic stations covering part of the computational model, we train neural networks to approximate the traveltime factor and the velocity fields, subject to the physics-informed regularizer formed by the factored eikonal equation. This allows us to better compensate for the ill-posedness of the tomography problem compared to conventional methods and results in a number of other attractive features, including computational efficiency. We show the efficacy of the proposed method and its capabilities through synthetic tests for surface seismic and cross-hole geometries. Contrary to conventional techniques, we find the performance of the proposed method to be agnostic to the choice of the initial velocity model.
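The physics-informed term is the residual of the factored eikonal equation, |∇(T0·τ)|² = 1/v², with T0 the traveltime in a homogeneous background; the sketch below shows how that residual can be formed with automatic differentiation, with network sizes, the background velocity, and the source location chosen only for illustration (the observed-traveltime misfit term is omitted).

```python
# Sketch of the physics-informed regularizer for traveltime tomography: the
# residual of the factored eikonal equation |grad(T0 * tau)|^2 = 1 / v^2,
# with T0 the known traveltime in a homogeneous background.
import torch

torch.manual_seed(0)
tau_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
vel_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

v0, source = 2.0, torch.tensor([0.5, 0.0])          # background velocity (km/s), source

def eikonal_residual(x):
    x = x.requires_grad_(True)
    T0 = torch.norm(x - source, dim=1, keepdim=True) / v0     # background traveltime
    T = T0 * tau_net(x)                                        # factored total traveltime
    v = 1.5 + torch.nn.functional.softplus(vel_net(x))         # positive velocity (km/s)
    grad_T = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    return ((grad_T ** 2).sum(dim=1, keepdim=True) - 1.0 / v ** 2) ** 2

x_colloc = torch.rand(512, 2)                        # random collocation points
loss_pde = eikonal_residual(x_colloc).mean()         # add the data misfit at stations here
print(float(loss_pde))
```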

03:00 PM

Can we learn robust latent representations directly from seismic data, which can then act as natural priors at various processing stages? In this talk we will report on our initial results showing the superior performance of NN-based representations against ad-hoc transforms (e.g., fk, curvelets) for tasks such as joint data reconstruction and receiver deghosting. We will however also show that this superiority is not magic and is only achieved by carefully choosing the network architecture and hyperparameters.

03:20 PM
03:59 PM


Chairs: Lukas Mosser and Matteo Ravasi

04:00 PM

Standard earthquake monitoring workflows can be described with the following sequence of steps: (1) pre-processing, (2) phase detection, (3) event detection/phase association, (4) event location, and (5) event characterization. Neural networks have been shown capable of replacing each of these steps, and in some cases lead to dramatic improvement. In this talk I will present progress on using machine learning on continuous ground motion recorded across a seismic network to generate earthquake catalogs that are far more comprehensive than those developed using standard approaches. A combination of appropriate architecture, accurate data labels, and data augmentation all play an important role in developing effective models. The simplest approach to implement machine-learning-based monitoring is modular – to replace individual earthquake monitoring steps one-by-one with neural network models. There can be advantages, however, in combining steps in multi-task models to take advantage of contextual information. Moreover, it is possible to combine all steps in an end-to-end model, which could hold advantages over the modular approach. In this talk I will demonstrate each of these possibilities and illustrate them with real-world examples.

Greg Beroza

Stanford University

04:20 PM

Deep neural networks have been leveraged in surprising ways in the context of computational inverse problems and imaging over the past few years. In this talk, I will explain how deep nets can sometimes generate helpful virtual “deepfake” data that weren’t originally recorded, but which extend the reach of inversion in a variety of ways. I will discuss two examples from seismic imaging: 1) bandwidth extension, which helps to convexify the inverse problem, and 2) “physics swap”, which helps to mitigate nuisance parameters. Joint work with Hongyu Sun, Pawan Bharadwaj, and Matt Li.

04:40 PM

We propose a new metric that can address some of the shortcomings of widely used metrics that were originally designed for traditional ML tasks not related to our field. The motivation is automatic measurement of quality improvement, which is not yet well addressed by our vibrant community. We will also cover a use case for the proposed metric related to transfer learning in the context of seismic inversion.

05:00 PM

We present recent advances in revealing Earth’s interior via novel implicit neural representations and the introduction of injective flows (TRUMPETS) enabling UQ.

Joint research with I. Dokmanić, A. E. Khorashadizadeh, K. Kothari and M. Puthawala.

Maarten de Hoop

Rice University

05:20 PM
06:00 PM


The best Lightning Talk awards
Summary and outlook

06:30 PM