Efficient handling of fault properties using the Juxtaposition Table Method

2020 ◽  
Vol 496 (1) ◽  
pp. 199-207 ◽  
Author(s):  
Tor Anders Knai ◽  
Guillaume Lescoffit

Abstract
Faults are known to affect the way that fluids can flow in clastic oil and gas reservoirs. Fault barriers either stop fluids from passing across or they restrict and direct the fluid flow, creating static or dynamic reservoir compartments. Representing the effect of these barriers in reservoir models is key to establishing optimal plans for reservoir drainage, field development and production.

Fault property modelling is challenging, however, as observations of faults in nature show rapid and unpredictable variation in fault rock content and architecture. Fault representation in reservoir models is necessarily a simplification, and it is important that the uncertainty ranges are captured in the input parameters. History matching also requires flexibility in order to handle a wide variety of data and observations.

The Juxtaposition Table Method is a new technique that efficiently handles all relevant geological and production data in fault property modelling. The method provides a common interface that all petroleum technology disciplines can readily relate to, and allows close cooperation between the geologist and the reservoir engineer when matching the reservoir model to observed production behaviour. Consequently, the method is well suited to fault property modelling throughout the life cycle of oil and gas fields, starting with geological predictions and incorporating knowledge of dynamic reservoir behaviour as production data become available.
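The core idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch of a juxtaposition table: fault transmissibility multipliers are looked up per pair of juxtaposed stratigraphic zones, so that all disciplines adjust one shared table rather than editing fault properties cell by cell. The zone names, multiplier values, and the sealing default are illustrative assumptions, not values from the paper.

```python
# Illustrative juxtaposition table: rows/columns are stratigraphic zones,
# entries are fault transmissibility multipliers (1.0 = open, 0.0 = sealing).
juxtaposition_table = {
    ("Zone_A", "Zone_A"): 0.8,   # sand-on-sand: mildly restrictive
    ("Zone_A", "Zone_B"): 0.1,   # sand-on-silt: strongly restrictive
    ("Zone_B", "Zone_B"): 0.0,   # shale-on-shale: sealing
}

def fault_multiplier(footwall_zone: str, hangingwall_zone: str) -> float:
    """Return the transmissibility multiplier for a juxtaposition pair.

    The table is symmetric, so (A, B) and (B, A) resolve to the same entry.
    Unlisted pairs default to sealing (0.0), a conservative assumption.
    """
    key = (footwall_zone, hangingwall_zone)
    if key not in juxtaposition_table:
        key = (hangingwall_zone, footwall_zone)
    return juxtaposition_table.get(key, 0.0)

print(fault_multiplier("Zone_B", "Zone_A"))  # -> 0.1
```

In a history-matching loop, only the table entries (and their uncertainty ranges) need to be varied, which is what makes the interface convenient for both geologists and reservoir engineers.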

2021 ◽  
pp. 9-22
Author(s):  
Yu. V. Vasiliev ◽  
M. S. Mimeev ◽  
D. A. Misyurev

The production of hydrocarbons changes the physical and mechanical properties of oil and gas reservoirs under the influence of rock and reservoir pressures. Deformation of the reservoir caused by a drop in reservoir pressure gives rise to various natural and man-made geodynamic and geomechanical phenomena, one of which is a subsidence trough at the earth's surface that compromises the stability of surface field facilities.

To ensure geodynamic safety, a suite of studies is applied that includes analysis of geological and production indicators and of geological-tectonic models of the field, interpretation of satellite and aerial imagery, identification of active faults, and construction of a predictive model of surface subsidence over the field with delineation of zones of geodynamic risk.

This work was carried out to assess the predicted parameters of rock displacement processes during field development; even minor disturbances to the operation of technological equipment caused by deformation processes can cause significant damage. Prediction of rock displacements is possible only on the basis of a reservoir deformation model that adequately reflects the geomechanical and geodynamic processes occurring in the subsurface. The article presents a model of reservoir deformation under declining reservoir pressure, describes its numerical implementation, and presents calculations for typical development conditions.
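For orientation on the kind of prediction involved, the sketch below evaluates Geertsma's classical closed-form estimate of surface subsidence at the centre of a disk-shaped, uniformly depleted reservoir in a homogeneous half-space. This textbook approximation is offered as a reference point, not as the numerical model developed in the article; all input values are illustrative.

```python
import math

def center_subsidence(cm, nu, dp, h, depth, radius):
    """Surface subsidence (m) at the centre of a disk-shaped reservoir
    (Geertsma's nucleus-of-strain solution, homogeneous half-space).

    cm     -- uniaxial compaction coefficient, 1/Pa
    nu     -- Poisson's ratio of the overburden
    dp     -- reservoir pressure drop, Pa (positive for depletion)
    h      -- reservoir thickness, m
    depth  -- depth to the reservoir centre, m
    radius -- reservoir disk radius, m
    """
    return 2.0 * cm * (1.0 - nu) * dp * h * (1.0 - depth / math.hypot(depth, radius))

# Illustrative numbers only: 10 MPa depletion of a 20 m reservoir at 2 km depth.
print(f"{center_subsidence(1e-9, 0.25, 10e6, 20.0, 2000.0, 1500.0):.3f} m")
```

A full field model must of course account for reservoir geometry, heterogeneity, and fault reactivation, which is why the article develops a numerical implementation rather than relying on such closed-form estimates.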


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary
This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model; the method is then applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work required to history-match real reservoirs using this method. A geological description of the reservoir case study is then provided, along with the procedure for building 3D reservoir models conditioned only to the static data. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed so that the oil, water, and gas rates from the reservoir model more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. To check predictive power, the matched models are then run for the last 1½ years, and the results compare favorably with the field data.

Introduction
Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem: the relationship between the reservoir model parameters and the dynamic data is highly nonlinear, and multiple solutions are available. Therefore, history matching is often done by trial and error. In real-world applications, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built from geological and seismic data. While attempts are usually made to honor these other data as much as possible, the history-matched models are often unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually take the form of box-shaped or pipe-shaped geometries centered around or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
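For readers unfamiliar with the PPM, the toy sketch below illustrates its central mechanism: the probability field, not the petrophysical property itself, is perturbed between the current realization and the prior marginal through a single parameter r, and r is optimized in one dimension against the data mismatch. A frozen uniform random field stands in for sequential indicator simulation, and a facies match at a few "well" cells stands in for a flow-simulation mismatch; both are placeholder assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Probability perturbation method (PPM), toy version. PPM perturbs
# probabilities, not properties:
#     P(u) = (1 - r) * i0(u) + r * p_marginal,   r in [0, 1],
# where i0 is the indicator map of the current facies realization and
# p_marginal the prior facies proportion. r = 0 reproduces the current
# realization; r = 1 draws a fresh one from the prior.

p_marginal = 0.3
i0 = (rng.random((32, 32)) < p_marginal).astype(float)   # current realization
u_fixed = rng.random((32, 32))                           # frozen random numbers
truth = (rng.random((32, 32)) < p_marginal).astype(float)
wells = (rng.integers(0, 32, 15), rng.integers(0, 32, 15))
d_obs = truth[wells]                                     # "observed" facies at wells

def realization(r):
    prob = (1.0 - r) * i0 + r * p_marginal               # perturbed probabilities
    return (u_fixed < prob).astype(float)

def mismatch(r):
    return np.mean((realization(r)[wells] - d_obs) ** 2)

res = minimize_scalar(mismatch, bounds=(0.0, 1.0), method="bounded")
print(f"best r = {res.x:.3f}, mismatch = {res.fun:.3f}")
```

Because every candidate model is drawn from the perturbed probabilities rather than edited directly, each accepted update remains a legitimate realization of the geostatistical prior, which is the property the paper emphasizes.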


2020 ◽  
Author(s):  
Konrad Wojnar ◽  
Jon Sætrom ◽  
Tore Felix Munck ◽  
Martha Stunell ◽  
Stig Sviland-Østre ◽  
...  

Abstract
The aim of the study was to create an ensemble of equiprobable models that could be used to improve the reservoir management of the Vilje field. Qualitative and quantitative workflows were developed to systematically and efficiently screen, analyze, and history match an ensemble of reservoir simulation models to production and 4D seismic data. The goal of developing the workflows is to increase the utilization of data from 4D seismic surveys for reservoir characterization. The qualitative and quantitative workflows are presented, together with their benefits and challenges.

The data conditioning produced a set of history-matched reservoir models that could be used in the field development decision-making process. The proposed workflows allowed identification of outlying prior and posterior models based on key features where the observed data were not covered by the synthetic 4D seismic realizations. As a result, suggestions were made for a more robust parameterization of the ensemble to improve data coverage. The existing history-matching workflow integrated efficiently with the quantitative 4D seismic history-matching workflow, allowing the reservoir models to be conditioned to production and 4D data and thus improving their predictability.

This paper proposes a systematic and efficient workflow that uses ensemble-based methods to simultaneously screen, analyze, and history match production and 4D seismic data. The proposed workflow improves the usability of 4D seismic data for reservoir characterization and, in turn, for reservoir management and decision-making.
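A minimal sketch of the qualitative screening idea follows: datum by datum, check whether the observations fall inside the spread of the ensemble's synthetic responses (data coverage), and rank members by normalized mismatch to flag outliers. Array shapes, the synthetic data, and the outlier threshold are illustrative assumptions, not details of the Vilje workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Screening an ensemble against observed data: coverage check plus
# outlier detection on per-member mismatch.
n_members, n_data = 100, 50
d_sim = rng.normal(0.0, 1.0, size=(n_members, n_data))  # synthetic 4D/production data
d_obs = rng.normal(0.0, 1.2, size=n_data)               # observations

# Data coverage: observations outside the ensemble envelope point to
# under-parameterization of the prior.
lo, hi = d_sim.min(axis=0), d_sim.max(axis=0)
uncovered = np.flatnonzero((d_obs < lo) | (d_obs > hi))
print(f"{uncovered.size} of {n_data} data not covered by the ensemble")

# Member screening: normalized RMS mismatch per member; members beyond
# roughly 3 (scaled) median absolute deviations are flagged as outliers.
rms = np.sqrt(((d_sim - d_obs) ** 2).mean(axis=1))
mad = np.median(np.abs(rms - np.median(rms)))
outliers = np.flatnonzero(rms > np.median(rms) + 3 * 1.4826 * mad)
print(f"outlying members: {outliers}")
```

Uncovered data motivate widening the prior parameterization, while outlying members are candidates for removal or closer inspection before the quantitative conditioning step.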


2017 ◽  
Vol 476 (2) ◽  
pp. 1120-1124 ◽  
Author(s):  
E. S. Zakirov ◽  
I. M. Indrupskiy ◽  
O. V. Liubimova ◽  
I. M. Shiriaev ◽  
D. P. Anikeev

2021 ◽  
Author(s):  
Uchenna Odi ◽  
Kola Ayeni ◽  
Nouf Alsulaiman ◽  
Karri Reddy ◽  
Kathy Ball ◽  
...  

Abstract
There are documented cases of machine learning being applied to different segments of the oil and gas industry with varying degrees of success. These successes have not readily transferred to production forecasting for unconventional oil and gas reservoirs because production data are sparse in the early stages of production. Sparsity of unconventional production data is a challenge, but transfer learning can mitigate it. Application of machine learning to production forecasting is difficult in areas with insufficient data; transfer learning makes it possible to carry over information gathered from well-established, data-rich areas to areas with relatively limited data. This study outlines the background theory along with the application of transfer learning in unconventionals to aid production forecasting.

Similarity metrics based on the key drivers of reservoir performance, such as similar reservoir mechanisms and subsurface structures, are used to find candidate fields for transfer learning. After the model is trained on a related field with rich data, most of the primary parameters learned and stored in a representative machine- or deep-learning model can be reused in a transfer-learning manner. By employing the already-learned basic features, models with sparse data are enriched. The approach is outlined step by step: with the help of the insights transferred from related, data-rich sites, the uncertainty in production forecasting decreases and the accuracy of the predictions increases. The details of selecting a related site for transfer learning, along with the challenges and steps in achieving the forecasts, are described.

There are few studies in the oil and gas literature on transfer learning for oil and gas reservoirs. Applied with care, it is a powerful method for increasing the success of models with sparse data. This study uses transfer learning to encapsulate the basic substructure of a well-known area and uses this information to empower the model, investigating the application to unconventional shale reservoirs, for which transfer-learning studies are limited.
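The recipe can be sketched with a small, hypothetical PyTorch example: a network trained on a data-rich analog field is copied to the sparse target field, its early layers (the shared "substructure") are frozen, and only the head is fine-tuned on the limited target data. The architecture, layer split, and data are placeholder assumptions, not the study's model.

```python
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(
        nn.Linear(8, 64), nn.ReLU(),   # shared feature layers
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),              # production-forecast head
    )

source_model = make_model()
# ... assume source_model has been trained on the data-rich analog field ...

target_model = make_model()
target_model.load_state_dict(source_model.state_dict())  # transfer the weights

# Freeze everything except the head so the shared features are preserved.
for layer in list(target_model.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False

# Fine-tune the head on the sparse target-field data (random placeholders).
x, y = torch.randn(20, 8), torch.randn(20, 1)
opt = torch.optim.Adam(
    filter(lambda p: p.requires_grad, target_model.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(target_model(x), y)
    loss.backward()
    opt.step()
print(f"fine-tuned loss: {loss.item():.4f}")
```

Freezing the early layers is what keeps the small target data set from overwriting the information carried over from the analog field; only the mapping to the forecast is re-estimated.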


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. M15-M31 ◽  
Author(s):  
Mingliang Liu ◽  
Dario Grana

We have developed a time-lapse seismic history matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data assimilation method, the ensemble smoother with multiple data assimilation (ES-MDA), is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements, and to quantify the model uncertainty of the posterior reservoir models. To address the computational and numerical challenges of applying ensemble-based optimization methods to large seismic data volumes, we develop a deep representation learning method, namely a deep convolutional autoencoder. The autoencoder reduces the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic data set, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and on the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic data sets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to the production data and the reparameterized time-lapse seismic data.
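The update step can be sketched compactly. The toy ES-MDA loop below updates an ensemble against observations perturbed with inflated noise; a linear operator stands in for the reservoir and seismic simulators, and the low-dimensional "data" stand in for the autoencoder's hidden features. All dimensions and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ES-MDA (ensemble smoother with multiple data assimilation).
n_e, n_m, n_d = 100, 50, 20
G = rng.normal(size=(n_d, n_m))
g = lambda M: G @ M                          # forward model, per ensemble column

M = rng.normal(size=(n_m, n_e))              # prior ensemble of model vectors
m_true = rng.normal(size=n_m)
C_D = 0.1 * np.eye(n_d)                      # data-error covariance
d_obs = g(m_true) + rng.multivariate_normal(np.zeros(n_d), C_D)

alphas = [4.0, 4.0, 4.0, 4.0]                # inflation factors, sum(1/alpha) = 1
for alpha in alphas:
    D = g(M)
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_e - 1)             # cross-covariance model/data
    C_dd = dD @ dD.T / (n_e - 1)             # data auto-covariance
    # Perturb observations with inflated noise, then update every member.
    eps = rng.multivariate_normal(np.zeros(n_d), alpha * C_D, size=n_e).T
    K = C_md @ np.linalg.inv(C_dd + alpha * C_D)
    M = M + K @ (d_obs[:, None] + eps - D)

print(f"final data mismatch: {np.mean((g(M) - d_obs[:, None]) ** 2):.4f}")
```

Replacing the raw seismic volume with a handful of autoencoder features keeps n_d small, which is exactly what makes the covariance estimates above tractable with a modest ensemble size.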


2005 ◽  
Vol 8 (05) ◽  
pp. 426-436 ◽  
Author(s):  
Hao Cheng ◽  
Arun Kharghoria ◽  
Zhong He ◽  
Akhil Datta-Gupta

Summary
We propose a novel approach to history matching finite-difference models that combines the advantages of streamline models with the versatility of finite-difference simulation. Current streamline models are limited in their ability to incorporate complex physical processes and cross-streamline mechanisms in a computationally efficient manner. A unique feature of streamline models is their ability to analytically compute the sensitivity of the production data with respect to reservoir parameters using a single flow simulation. These sensitivities define the relationship between changes in production response and small changes in reservoir parameters and, thus, form the basis for many history-matching algorithms. In our approach, we use the streamline-derived sensitivities to facilitate history matching during finite-difference simulation. First, the velocity field from the finite-difference model is used to compute streamline trajectories, time of flight, and parameter sensitivities. The sensitivities are then used in an inversion algorithm to update the reservoir model during finite-difference simulation. The use of a finite-difference model allows us to account for detailed process physics and compressibility effects. Although the streamline-derived sensitivities are only approximate, they do not seem to noticeably impact the quality of the match or the efficiency of the approach. For history matching, we use a generalized travel-time inversion (GTTI) that is shown to be robust because of its quasilinear properties and that converges in only a few iterations. The approach is very fast and avoids many of the subjective judgments and time-consuming trial-and-error steps associated with manual history matching. We demonstrate the power and utility of our approach with a synthetic example and two field examples. The first is from a CO2 pilot area in the Goldsmith San Andres Unit (GSAU), a dolomite formation in west Texas with more than 20 years of waterflood production history. The second is from a Middle Eastern reservoir and involves history matching a multimillion-cell geologic model with 16 injectors and 70 producers. The final model preserved all of the prior geologic constraints while matching 30 years of production history.

Introduction
Geological models derived from static data alone often fail to reproduce the field production history. Reconciling geologic models with the dynamic response of the reservoir is critical to building reliable reservoir models. Classical history-matching procedures, whereby reservoir parameters are adjusted manually by trial and error, can be tedious and often yield a reservoir description that is not realistic or consistent with the geologic interpretation. In recent years, several techniques have been developed for integrating production data into reservoir models. Integration of dynamic data typically requires a least-squares-based minimization to match the observed and calculated production responses. There are several approaches to such minimization, and they fall broadly into three categories: gradient-based methods, sensitivity-based methods, and derivative-free methods. The derivative-free approaches, such as simulated annealing or genetic algorithms, require numerous flow simulations and can be computationally prohibitive for field-scale applications.
Gradient-based methods have been used widely for automatic history matching, although their convergence rates are typically slower than those of sensitivity-based methods such as the Gauss-Newton or LSQR methods. An integral part of the sensitivity-based methods is the computation of sensitivity coefficients. These sensitivities are simply partial derivatives that define the change in production response caused by small changes in reservoir parameters. There are several approaches to calculating sensitivity coefficients, and they generally fall into one of three categories: the perturbation method, direct methods, and adjoint-state methods.

Conceptually, the perturbation approach is the simplest and requires the fewest changes to an existing code. Sensitivities are estimated simply by perturbing the model parameters one at a time by a small amount and then computing the corresponding production response (as illustrated in the sketch following this passage). This approach requires (N+1) forward simulations, where N is the number of parameters; it can obviously be computationally prohibitive for reservoir models with many parameters. In the direct, or sensitivity-equation, method, the flow and transport equations are differentiated to obtain expressions for the sensitivity coefficients. Because there is one equation for each parameter, this approach requires an amount of work comparable to the perturbation method. A variation of this method, called the gradient-simulator method, uses the discretized version of the flow equations and takes advantage of the fact that the coefficient matrix remains unchanged for all the parameters and needs to be decomposed only once; sensitivity computation for each parameter then requires only a matrix/vector multiplication. This method can also be computationally expensive for a large number of parameters. Finally, the adjoint-state method requires derivation and solution of adjoint equations that can be quite cumbersome for multiphase-flow applications. Furthermore, the number of adjoint solutions generally depends on the amount of production data and, thus, on the length of the production history.
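A minimal sketch of the perturbation method referenced above: the base model is simulated once, each of the N parameters is perturbed in turn, and one-sided differences give the sensitivity matrix, for N+1 forward runs in total. The toy forward model is a placeholder for a reservoir simulator.

```python
import numpy as np

def forward(m):
    """Toy 'production response': a few data from n_m parameters."""
    return np.array([m @ m, np.sin(m).sum(), m[0] * m[-1]])

def perturbation_sensitivities(m, h=1e-6):
    """Finite-difference sensitivities via the perturbation method."""
    d0 = forward(m)                          # base run (1 simulation)
    S = np.empty((d0.size, m.size))
    for j in range(m.size):                  # N perturbed runs
        mp = m.copy()
        mp[j] += h
        S[:, j] = (forward(mp) - d0) / h     # one-sided difference
    return S                                 # S[i, j] = d d_i / d m_j

m = np.linspace(0.1, 1.0, 5)
print(perturbation_sensitivities(m).round(3))
```

The loop makes the cost structure explicit: one forward run per parameter, which is exactly why streamline-derived sensitivities from a single simulation, as in the paper, are so attractive for large models.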


2013 ◽  
Vol 748 ◽  
pp. 614-618
Author(s):  
Bao Yi Jiang ◽  
Zhi Ping Li ◽  
Cheng Wen Zhang ◽  
Xi Gang Wang

Numerical reservoir models are constructed from the limited static and dynamic data available, and history matching is the process of changing model parameters to find a set of values for which the reservoir simulation reproduces the observed historical production data. Minimizing the objective function involved in the history-matching procedure requires optimization algorithms. This paper focuses on the optimization algorithms used in automatic history matching and compares several of them.
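A toy version of such a comparison might look like the sketch below, which fits a placeholder linear "simulator" to noisy data with a sensitivity-based method (Levenberg-Marquardt), a gradient-based quasi-Newton method (BFGS), and a derivative-free method (Nelder-Mead) via SciPy. The model and data are illustrative assumptions, not the paper's test cases.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

rng = np.random.default_rng(3)

# Placeholder linear "simulator" d = G m, plus noisy observations.
G = rng.normal(size=(30, 8))
m_true = rng.normal(size=8)
d_obs = G @ m_true + 0.01 * rng.normal(size=30)

residuals = lambda m: G @ m - d_obs                 # history-match residuals
objective = lambda m: 0.5 * residuals(m) @ residuals(m)
m0 = np.zeros(8)                                    # initial model

lm = least_squares(residuals, m0, method="lm")      # Levenberg-Marquardt
bfgs = minimize(objective, m0, method="BFGS")       # quasi-Newton
nm = minimize(objective, m0, method="Nelder-Mead",  # derivative-free
              options={"maxiter": 5000, "xatol": 1e-9, "fatol": 1e-12})

for name, res in [("LM", lm), ("BFGS", bfgs), ("Nelder-Mead", nm)]:
    print(f"{name:12s} objective = {objective(res.x):.3e}")
```

On real history-matching problems the forward model is an expensive simulation, so the number of objective evaluations each algorithm needs, visible even in this toy, becomes the decisive criterion.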

