Los Alamos researchers are the first to successfully demonstrate a machine-learning-based seismic imaging technique applied to real data. Once trained, the model can significantly reduce computation time and yield more accurate models of subsurface geology.
Why it matters
Accurately and efficiently characterizing subsurface geology is crucial for various applications, such as energy exploration, civil infrastructure, and groundwater contamination and remediation. The standard approach to obtaining this information is through computational seismic imaging, which involves reconstructing an image of subsurface structures from measurements of natural or artificially produced seismic waves.
Inspired by recent successes in applying deep learning to computer vision and medical imaging problems, researchers have applied deep-learning-based, data-driven methods to seismic imaging. Several encoder-decoder networks have been developed to reconstruct subsurface structure from seismic data. These models are end-to-end: they take seismic waveform data as input and directly output the corresponding subsurface structure.
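The end-to-end contract described above, seismic waveforms in, a subsurface model directly out, can be sketched minimally. The sketch below uses plain NumPy with hypothetical dimensions and randomly initialized weights standing in for trained parameters; the paper's actual networks are convolutional encoder-decoders, not the dense layers shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: flattened seismic waveform samples in,
# a flattened 2-D velocity model out.
n_input = 512       # seismic waveform samples (flattened shot gather)
n_latent = 32       # bottleneck ("encoder" output)
n_output = 64 * 64  # flattened 64x64 subsurface velocity model

# Random weights stand in for trained parameters.
W_enc = rng.standard_normal((n_latent, n_input)) * 0.01
W_dec = rng.standard_normal((n_output, n_latent)) * 0.01

def encode(x):
    """Compress seismic data into a low-dimensional latent code (ReLU layer)."""
    return np.maximum(W_enc @ x, 0.0)

def decode(z):
    """Expand the latent code into a 2-D velocity-model image."""
    return (W_dec @ z).reshape(64, 64)

seismic = rng.standard_normal(n_input)    # one synthetic input, flattened
velocity_model = decode(encode(seismic))  # direct, end-to-end inversion
print(velocity_model.shape)               # (64, 64)
```

The key property this illustrates is the one the article emphasizes: once the weights are trained, inversion is a single cheap forward pass rather than an iterative physics simulation.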
Once those models are fully trained, the inversion procedure is computationally efficient. However, a significant limitation of these data-driven methods is their weak generalization ability, which hinders the wide application of data-driven seismic imaging approaches to field data. Furthermore, it can be extremely difficult and expensive to collect real subsurface structure models and their corresponding seismic measurements, which results in training sets with limited representativeness.
How they did it
To overcome the weak generalization issue, Los Alamos scientists Shihang Feng, Youzuo Lin and Brendt Wohlberg explored the possibility of enriching the training set and incorporating critical physics phenomena into their predictive model.
Their idea was inspired by artistic style transfer problems from the computer vision community. There, the goal is to transfer the artistic style of one painting onto another image by minimizing a style loss and a content loss, both computed from features extracted by a pretrained convolutional neural network.
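The two losses mentioned above can be sketched as follows. This is a generic illustration of the classic style-transfer objective, not the paper's specific implementation: the feature maps are random stand-ins for CNN activations, and the weights `alpha` and `beta` are hypothetical.

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel feature correlations: the 'style' statistic."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_gen, feat_style):
    """Mean squared difference between Gram matrices."""
    return np.mean((gram_matrix(feat_gen) - gram_matrix(feat_style)) ** 2)

def content_loss(feat_gen, feat_content):
    """Mean squared difference between raw feature maps."""
    return np.mean((feat_gen - feat_content) ** 2)

rng = np.random.default_rng(1)
# Stand-ins for pretrained-CNN feature maps (channels x height x width).
feat_content = rng.standard_normal((8, 16, 16))  # e.g. a natural image
feat_style = rng.standard_normal((8, 16, 16))    # e.g. a geologic "style" image
feat_gen = rng.standard_normal((8, 16, 16))      # the image being synthesized

alpha, beta = 1.0, 1e3  # hypothetical content/style weighting
total = alpha * content_loss(feat_gen, feat_content) \
      + beta * style_loss(feat_gen, feat_style)
```

In full style transfer, `total` would be minimized with respect to the generated image by gradient descent; matching Gram matrices transfers texture-like "style" while the content loss preserves spatial structure.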
Those tools, therefore, gave the researchers a way to bridge images from two different physical domains. Specifically, subsurface structure models represent geophysical properties in 2-D, so they can also be viewed as images of a physical property. The researchers' method converted a large volume of existing natural images into subsurface structure models with predetermined geologic styles (see image). In this manner, it generated a large number of synthetic subsurface velocity models with sufficient variability. This, in turn, not only helped their data-driven models learn the governing physics of the problem during training but also gave them high generalization ability, because the training data were far richer and more representative.
By successfully demonstrating this technique, the researchers have opened the door to a number of potential subsurface applications, such as carbon capture and sequestration, subsurface energy exploration, estimating pathways of subsurface contaminant transport, and earthquake early warning systems to provide critical alerts.
Read the full paper in IEEE Transactions on Geoscience and Remote Sensing.