A Practical Data Integration Approach to History Matching: Application to a Deepwater Reservoir

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that history-matching artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work required to history-match real reservoirs with this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models conditioned only to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed so that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the remaining 1½ years, and the results compare favorably with the field data.

Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible.
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem: the relationship between the reservoir model parameters and the dynamic data is highly nonlinear, and multiple solutions are available. Therefore, history matching is often done by trial and error. In real-world applications, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built from geological and seismic data. While attempts are usually made to honor these other data as much as possible, the history-matched models are often unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
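The probability-perturbation idea can be illustrated with a toy binary-facies sketch: the current realization and the prior facies probability are blended by a single parameter r, and a new realization is drawn from the blended probability. This is a schematic stand-in (cell-independent draws, hypothetical names), not the multiple-point geostatistical implementation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells = 1000
prior_prob = np.full(n_cells, 0.3)     # prior channel-facies probability (geology/seismic)
current = (rng.random(n_cells) < prior_prob).astype(float)   # current realization i(u)

def perturbed_prob(r, current, prior_prob):
    """Blend the current realization with the prior probability:
    r = 0 keeps the current model, r = 1 redraws freely from the prior."""
    return (1.0 - r) * current + r * prior_prob

def draw(prob, seed):
    rng = np.random.default_rng(seed)
    return (rng.random(prob.size) < prob).astype(float)

# r would be tuned (e.g. by a 1D search on the flow-simulation mismatch);
# here we only show that a small r gives a small perturbation of the model
small = draw(perturbed_prob(0.1, current, prior_prob), seed=1)
large = draw(perturbed_prob(0.9, current, prior_prob), seed=1)
frac_changed_small = float(np.mean(small != current))
frac_changed_large = float(np.mean(large != current))
```

Because the perturbation acts on probabilities, any realization drawn from the blended field is still a legitimate sample of the facies model, which is what preserves the geologic concept during matching.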

2020 ◽  
Author(s):  
Konrad Wojnar ◽  
Jon Sætrom ◽  
Tore Felix Munck ◽  
Martha Stunell ◽  
Stig Sviland-Østre ◽  
...  

Abstract The aim of the study was to create an ensemble of equiprobable models that could be used for improving the reservoir management of the Vilje field. Qualitative and quantitative workflows were developed to systematically and efficiently screen, analyze and history match an ensemble of reservoir simulation models to production and 4D seismic data. The goal of developing the workflows is to increase the utilization of data from 4D seismic surveys for reservoir characterization. The qualitative and quantitative workflows are presented, describing their benefits and challenges. The data conditioning produced a set of history matched reservoir models which could be used in the field development decision making process. The proposed workflows allowed for identification of outlying prior and posterior models based on key features where observed data was not covered by the synthetic 4D seismic realizations. As a result, suggestions for a more robust parameterization of the ensemble were made to improve data coverage. The existing history matching workflow efficiently integrated with the quantitative 4D seismic history matching workflow allowing for the conditioning of the reservoir models to production and 4D data. Thus, the predictability of the models was improved. This paper proposes a systematic and efficient workflow using ensemble-based methods to simultaneously screen, analyze and history match production and 4D seismic data. The proposed workflow improves the usability of 4D seismic data for reservoir characterization, and in turn, for the reservoir management and the decision-making processes.
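The screening step described above — checking whether observed data are covered by the spread of synthetic realizations and flagging outlying models — can be sketched roughly as follows; the data, thresholds, and names are illustrative assumptions, not the Vilje workflow:

```python
import numpy as np

rng = np.random.default_rng(42)

n_models, n_obs = 50, 20
# synthetic "4D attribute" predictions for each prior model, plus observed data
predictions = rng.normal(loc=1.0, scale=0.2, size=(n_models, n_obs))
observed = rng.normal(loc=1.0, scale=0.2, size=n_obs)
observed[3] = 2.5          # one observation deliberately outside the prior spread

# coverage check: is each observation inside the ensemble P10-P90 envelope?
p10, p90 = np.percentile(predictions, [10, 90], axis=0)
covered = (observed >= p10) & (observed <= p90)
uncovered_idx = np.flatnonzero(~covered)   # suggests re-parameterizing the prior

# outlier models: misfit far above the ensemble median misfit
misfit = np.sum((predictions - observed) ** 2, axis=1)
outlier_models = np.flatnonzero(misfit > np.median(misfit) + 3 * misfit.std())
```

Observations falling outside the envelope are the signal the abstract describes: the prior parameterization does not cover the data and should be made more robust before conditioning.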


2006 ◽  
Vol 9 (05) ◽  
pp. 502-512 ◽  
Author(s):  
Arne Skorstad ◽  
Odd Kolbjornsen ◽  
Asmund Drottning ◽  
Havar Gjoystdal ◽  
Olaf K. Huseby

Summary Elastic seismic inversion is a tool frequently used in the analysis of seismic data. Elastic inversion relies on a simplified seismic model and generally produces 3D cubes for compressional-wave velocity, shear-wave velocity, and density. By applying rock-physics theory, such volumes may be interpreted in terms of lithology and fluid properties. Understanding the robustness of forward and inverse techniques is important when deciding how much information is carried by seismic data. This paper suggests a simple method to update a reservoir characterization by comparing 4D-seismic data with flow simulations on an existing characterization conditioned on the base-survey data. The ability to use results from a 4D-seismic survey in reservoir characterization depends on several aspects. To investigate this, a loop that performs independent forward seismic modeling and elastic inversion at two time stages has been established. In the workflow, a synthetic reservoir is generated from which data are extracted. The task is to reconstruct the reservoir on the basis of these data. Working on a realistic synthetic reservoir provides full knowledge of the reservoir characteristics, which strengthens the evaluation of questions regarding the fundamental dependency between the seismic and petrophysical domains. The synthetic reservoir is an ideal case, where properties are known to an accuracy never achieved in an applied situation. It can therefore be used to investigate the theoretical limitations of the information content in the seismic data. The deviations in water and oil production between the reference and predicted reservoir were significantly decreased by use of 4D-seismic data in addition to the 3D inverted elastic parameters.

Introduction It is well known that the information in seismic data is limited by the bandwidth of the seismic signal.
4D seismics give information on the changes between base and monitor surveys and are consequently an important source of information regarding the principal flow in a reservoir. Because of its limited resolution, the presence of a thin thief zone can be observed only as a consequence of flow, and the exact location will not be found directly. This paper addresses the question of how much information there is in the seismic data, and how this information can be used to update the model for petrophysical reservoir parameters. Several methods for incorporating 4D-seismic data in the reservoir-characterization workflow for improving history matching have been proposed earlier. The 4D-seismic data and the corresponding production data are not on the same scale, but they need to be combined. Huang et al. (1997) proposed a simulated annealing method for conditioning these data, while Lumley and Behrens (1997) describe a workflow loop in which the 4D-seismic data are compared with those computed from the reservoir model. Gosselin et al. (2003) give a short overview of the use of 4D-seismic data in reservoir characterization and propose using gradient-based methods for history matching the reservoir model on seismic and production data. Vasco et al. (2004) show that 4D data contain information of large-scale reservoir-permeability variations, and they illustrate this in a Gulf of Mexico example.
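The basic 4D comparison discussed here — differencing monitor and base responses and flagging cells where the model's predicted change disagrees with the observed change — can be sketched minimally, assuming synthetic impedance maps and an arbitrary threshold (all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic base/monitor acoustic-impedance maps; the "field" softens where
# injected water displaced oil, while the model predicts no change there
shape = (20, 20)
base = rng.normal(6000.0, 100.0, shape)
monitor_observed = base.copy()
monitor_observed[5:10, 5:10] -= 300.0   # swept zone the model flow missed

monitor_model = base.copy()             # model predicts no 4D change

# 4D attribute: monitor minus base; the mismatch map flags where to update
observed_4d = monitor_observed - base
model_4d = monitor_model - base
mismatch = np.abs(observed_4d - model_4d)

update_region = mismatch > 150.0        # half the induced impedance change
n_flagged = int(update_region.sum())    # the 5x5 swept zone
```

As the text notes, such a map localizes the principal flow effect (here a swept zone) even when the feature causing it, such as a thin thief zone, is below seismic resolution.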


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. M15-M31 ◽  
Author(s):  
Mingliang Liu ◽  
Dario Grana

We have developed a time-lapse seismic history matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data assimilation method, the ensemble smoother with multiple data assimilation (ES-MDA), is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements, and to quantify the model uncertainty of the posterior reservoir models. To address computational and numerical challenges in applying ensemble-based optimization methods to large seismic data volumes, we develop a deep representation learning method, namely the deep convolutional autoencoder. This method reduces the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic data set, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic data sets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to production data and the reparameterized time-lapse seismic data.
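The ES-MDA update at the core of this framework can be sketched in numpy for a linear toy forward model (a stand-in for the simulator plus autoencoder features; the function and variable names here are assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def esmda_update(m, d_pred, d_obs, obs_std, alpha, rng):
    """One ES-MDA step on a parameter ensemble m (n_ens x n_m) given
    predicted data d_pred (n_ens x n_d) and inflation factor alpha."""
    n_ens, n_d = d_pred.shape
    d_pert = d_obs + np.sqrt(alpha) * obs_std * rng.standard_normal(d_pred.shape)
    dm = m - m.mean(axis=0)
    dd = d_pred - d_pred.mean(axis=0)
    c_md = dm.T @ dd / (n_ens - 1)          # cross-covariance params/data
    c_dd = dd.T @ dd / (n_ens - 1)          # data auto-covariance
    gain = c_md @ np.linalg.inv(c_dd + alpha * obs_std**2 * np.eye(n_d))
    return m + (d_pert - d_pred) @ gain.T

# linear toy forward model standing in for "simulator + latent seismic features"
n_ens, n_m, n_d = 100, 5, 3
G = rng.normal(size=(n_d, n_m))
d_obs = G @ rng.normal(size=n_m)
obs_std = 0.05

m_ens = rng.normal(size=(n_ens, n_m))
initial_misfit = float(np.mean(np.sum((m_ens @ G.T - d_obs) ** 2, axis=1)))

n_mda = 4                                   # alpha = n_mda so sum(1/alpha) = 1
for _ in range(n_mda):
    d_pred = m_ens @ G.T
    m_ens = esmda_update(m_ens, d_pred, d_obs, obs_std, n_mda, rng)

final_misfit = float(np.mean(np.sum((m_ens @ G.T - d_obs) ** 2, axis=1)))
```

In the paper, `d_obs` would be production data concatenated with the autoencoder's low-dimensional hidden features rather than raw seismic volumes, which keeps `c_dd` small enough to work with.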


2013 ◽  
Vol 748 ◽  
pp. 614-618 ◽  
Author(s):  
Bao Yi Jiang ◽  
Zhi Ping Li ◽  
Cheng Wen Zhang ◽  
Xi Gang Wang

Numerical reservoir models are constructed from limited available static and dynamic data, and history matching is the process of changing model parameters to find a set of values that yields a reservoir simulation prediction matching the observed historical production data. Minimizing the objective function involved in the history-matching procedure requires optimization algorithms. This paper reviews the optimization algorithms used in automatic history matching and compares several of them.
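As a toy illustration of the kind of comparison described, two optimizers can be run on the same least-squares misfit: a gradient method and a derivative-free random search (a stand-in for stochastic algorithms). The forward model and settings are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "simulator": production response is a linear function of two parameters
G = np.array([[1.0, 0.5], [0.2, 1.5], [0.8, 0.8]])
m_true = np.array([2.0, -1.0])
d_obs = G @ m_true

def objective(m):
    """least-squares mismatch between simulated and observed data"""
    r = G @ m - d_obs
    return float(r @ r)

# method 1: gradient descent (the gradient of the misfit is 2 G^T r)
m = np.zeros(2)
for _ in range(200):
    m -= 0.05 * (2.0 * G.T @ (G @ m - d_obs))
gd_m, gd_obj = m.copy(), objective(m)

# method 2: derivative-free random search (accept only improvements)
m, rs_obj = np.zeros(2), objective(np.zeros(2))
for _ in range(2000):
    cand = m + rng.normal(scale=0.1, size=2)
    if objective(cand) < rs_obj:
        m, rs_obj = cand, objective(cand)
```

The trade-off this exposes is the usual one: gradient methods converge in few iterations when derivatives are available, while derivative-free methods need many more forward runs but place no smoothness requirements on the simulator.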


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 431-442 ◽  
Author(s):  
Xian-Huan Wen ◽  
Wen H. Chen

Summary The ensemble Kalman filter (EnKF) technique has been reported to be very efficient for real-time updating of reservoir models to match the most current production data. Using EnKF, an ensemble of reservoir models assimilating the most current observations of production data is always available. Thus, the estimates of reservoir model parameters and their associated uncertainty, as well as the forecasts, are always up-to-date. In this paper, we apply the EnKF to continuously update an ensemble of permeability models to match real-time multiphase production data. We improve the previous EnKF by adding a confirming option (i.e., the flow equations are re-solved from the previous assimilation step to the current step using the updated current permeability models). By doing so, we ensure that the updated static and dynamic parameters are always consistent with the flow equations at the current step. However, this also creates some inconsistency between the static and dynamic parameters at the previous step, where the confirming starts. Nevertheless, we show that, with the confirming approach, the filter performs better for the particular example investigated. We also investigate the sensitivity of using different numbers of realizations in the EnKF. Our results show that a relatively large number of realizations is needed to obtain stable results, particularly for a reliable assessment of uncertainty. The sensitivity of using different covariance functions is also investigated. The efficiency and robustness of the EnKF are demonstrated using an example. By assimilating more production data, new features of heterogeneity in the reservoir model can be revealed with reduced uncertainty, resulting in more accurate predictions of reservoir production.

Introduction The reliability of reservoir models increases as more data are included in their construction.
Traditionally, static (hard and soft) data, such as geological, geophysical, and well log/core data are incorporated into reservoir geological models through conditional geostatistical simulation (Deutsch and Journel 1998). Dynamic production data, such as historical measurements of reservoir production, account for the majority of reservoir data collected during the production phase. These data are directly related to the recovery process and to the response variables that form the basis for reservoir management decisions. Incorporation of dynamic data is typically done through a history-matching process. Traditionally, history matching adjusts model variables (such as permeability, porosity, and transmissibility) so that the flow simulation results using the adjusted parameters match the observations. It usually requires repeated flow simulations. Both manual and (semi-) automatic history-matching processes are available in the industry (Chen et al. 1974; He et al. 1996; Landa and Horne 1997; Milliken and Emanuel 1998; Vasco et al. 1998; Wen et al. 1998a, 1998b; Roggero and Hu 1998; Agarwal and Blunt 2003; Caers 2003; Cheng et al. 2004). Automatic history matching is usually formulated in the form of a minimization problem in which the mismatch between measurements and computed values is minimized (Tarantola 1987; Sun 1994). Gradient-based methods are widely employed for such minimization problems, which require the computation of sensitivity coefficients (Li et al. 2003; Wen et al. 2003; Gao and Reynolds 2006). In the recent decade, automatic history matching has been a very active research area with significant progress reported (Cheng et al. 2004; Gao and Reynolds 2006; Wen et al. 1997). However, most approaches are either limited to small and simple reservoir models or are computationally too intensive for practical applications. 
Under the framework of traditional history matching, the assessment of uncertainty is usually through a repeated history-matching process with different initial models, which makes the process even more CPU-demanding. In addition, the traditional history-matching methods are not designed in such a fashion that allows for continuous model updating. When new production data are available and are required to be incorporated, the history-matching process has to be repeated using all measured data. These limit the efficiency and applicability of the traditional automatic history-matching techniques.
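A schematic numpy sketch of sequential EnKF updating with the confirming option, on a one-parameter decline model (the toy model and all names are assumptions, not the paper's reservoir example): after each analysis, the state is recomputed from the previous step with the updated parameter rather than taken directly from the Kalman update, keeping state and parameter consistent with the flow equation:

```python
import numpy as np

rng = np.random.default_rng(3)

n_ens, obs_std = 200, 0.05
a_true, x_truth = 0.9, 10.0

a_ens = rng.uniform(0.5, 1.0, n_ens)   # uncertain static parameter (decline factor)
x_ens = np.full(n_ens, 10.0)           # dynamic state (e.g., rate), known initially
prior_std = float(a_ens.std())

for step in range(5):
    x_truth *= a_true
    d_obs = x_truth + rng.normal(0.0, obs_std)

    # forecast: advance each member's state with its own parameter
    x_prev = x_ens.copy()
    x_ens = a_ens * x_ens

    # analysis: Kalman update of the parameter from the predicted data
    da = a_ens - a_ens.mean()
    dx = x_ens - x_ens.mean()
    c_ad = np.mean(da * dx)            # cov(parameter, predicted data)
    c_dd = np.mean(dx * dx)            # var(predicted data)
    k = c_ad / (c_dd + obs_std ** 2)
    d_pert = d_obs + rng.normal(0.0, obs_std, n_ens)
    a_ens = a_ens + k * (d_pert - x_ens)

    # "confirming" step: re-solve the flow from the previous step with the
    # updated parameters so state and parameter stay mutually consistent
    x_ens = a_ens * x_prev

posterior_mean, posterior_std = float(a_ens.mean()), float(a_ens.std())
```

Each assimilation narrows the parameter ensemble toward the true decline factor, which is the continuous-updating behavior the summary describes.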


2021 ◽  
Author(s):  
Usman Aslam ◽  
Luis Hernando Perez Cardenas ◽  
Andrey Klimushin

Abstract The Internet of Things has popularized the notion of a digital twin - a virtual representation of a physical system. There are substantial risks associated with designing a development plan for an oilfield, and the industry has been making use of reservoir models - digital twins - to improve the decision-making process for many years. With an increase in the availability of computational resources, the industry is moving towards ensemble-based workflows to estimate risk in field development plans. In this paper, we demonstrate the use of an integrated ensemble-based approach to assess uncertainties in the reservoir models and quantify their impact on the decision-making process. An important feature of a digital twin is its ability to use sensor data to update the virtual model, more commonly known as history matching or data assimilation. We demonstrate how production data can be used to identify and constrain the uncertainties in the reservoir model. Production data are incorporated using Bayesian statistics and state-of-the-art supervised machine learning techniques to create an ensemble of models that capture the range of uncertainties in the reservoir model. This ensemble of calibrated models with an improved predictive ability provides a realistic assessment of the uncertainty associated with production forecasts. The ensemble-based approach is demonstrated through its application on an offshore oilfield located in the North Sea. The field is highly compartmentalized and has high structural uncertainty following the interpretation and depth conversion. An integrated cross-domain model is set up to incorporate typically ignored structural uncertainty in addition to the uncertainties and their dependencies in the dynamic parameters, including fault transmissibility, pore-volume, fluid contacts, saturation, and relative permeability endpoints. 
Results from the history matched ensemble of models show a significant reduction in uncertainty in these parameters and the predicted production. An advantage of the proposed technique is that the automated, repeatable, and auditable ensemble-based workflow can assimilate newly acquired measured data into the reservoir model at any time, keeping the model up-to-date and evergreen.


2002 ◽  
Vol 5 (03) ◽  
pp. 255-265 ◽  
Author(s):  
X.-H. Wen ◽  
T.T. Tran ◽  
R.A. Behrens ◽  
J.J. Gomez-Hernandez

Summary The stochastic inversion of the spatial distribution of lithofacies from multiphase production data is a difficult problem. This is true even for the simplest case, addressed here, of a sand/shale distribution under the assumption that reservoir properties are constant within each lithofacies. Two geostatistically based inverse techniques, sequential self-calibration (SSC) and GeoMorphing (GM), are extended for this purpose and then compared using synthetic reference fields. The extension of both techniques is based on the one-to-one relationship existing between lithofacies and Gaussian deviates in truncated Gaussian simulation. Both techniques attempt to modify the field of Gaussian deviates while maintaining the truncation threshold field through an optimization procedure. Maintaining a fixed threshold field, which has been computed previously on the basis of prior lithofacies proportion data, well data, and other static soft data, guarantees preservation of the initial geostatistical structure. Comparisons of the two techniques using 2D and 3D synthetic data show that SSC is very efficient in producing sand/shale realizations matching production data and reproducing the large-scale patterns displayed in the reference fields, although it has difficulty reproducing small-scale features. GM is a simpler algorithm than SSC, but it is computationally more intensive and has difficulty matching complex production data. Better results could be obtained with a combination of the two techniques in which SSC is used to generate realizations identifying large-scale features; these realizations could then be used as input to GM for a final update to match small-scale details.

Introduction Reliable predictions of future reservoir performance require reservoir models that incorporate all available relevant information.
Geostatistical methods are widely used and well suited to construct reservoir models of porosity and permeability honoring static data, such as core data, well-log data, seismic data, and geological conceptual data. Dynamic production data, such as production rate, pressure, water cut, and gas/oil ratio (GOR), have been largely overlooked for constraining geostatistical models because of the complication and difficulty of integrating them. Traditional geostatistical methods for integrating static data are not well suited for integrating dynamic data because dynamic data are nonlinearly related to reservoir properties through the flow equations. Typically, an inverse technique is needed for such integration, in which the flow equations must be solved many times within a nonlinear optimization procedure. In recent years, a number of inverse techniques have been developed and shown capable of preconstraining geostatistical models before they go to the manual history matching phase. Ref. 1 provides a review of these inverse techniques. Two geostatistically based approaches that have shown great potential for the integration of dynamic data are SSC and GM. The SSC method iteratively perturbs the given reservoir model at each gridblock to match the production data while preserving the geostatistical features and static hard/soft data conditioning.2–6 The perturbation is computed through an optimization procedure after a parameterization of the optimization problem with a reduced number of parameters that requires the computation of sensitivity coefficients. The reduced number of parameters to optimize and a fast calculation of the sensitivity coefficients make the inversion computationally feasible. Multiple realizations of the reservoir model can be produced, from which uncertainty can be assessed. 
Applications of the SSC method to invert permeability distribution from single-phase and multiphase production data have shown their efficiency and robustness.3–6 In this paper, we extend the SSC method to invert lithofacies distributions from production data within the framework of truncated Gaussian simulation. We limit ourselves to sand/shale reservoirs in which permeability is assumed constant within each facies. GM is an evolution and extension of the Gradual Deformation method.7–9 This method generates realizations of reservoir models by an iterative procedure in which, at each iteration, unconditional realizations are linearly and optimally combined into a new realization with a better reproduction of the production data than any other members of the linear combination. Because the linear combination of a few realizations depends only on a few parameters, the optimization procedure is very easy to implement. Our GM algorithm follows the modification of the gradual deformation algorithm by Ying and Gómez-Hernández10 to honor the well data while preserving the permeability variogram. Our modification here is aimed at inverting a lithofacies distribution from production data within the framework of truncated Gaussian simulation. Comparisons of these two methods in generating multiple geostatistical sand/shale reservoir models that honor dynamic production data are made by using both 2D and 3D synthetic data sets. The comparison of the results against the reference models provides direct assessment of the two methods. A thorough comparison of the two methods is made in terms of reproduction of reservoir spatial patterns, matching of production data, implementation issues, feasibility, CPU time, and generality. We also discuss briefly the possible combination of the strength of the two methods to achieve better, more efficient integration of production data. 
In the following sections, we first recall the methodology of truncated Gaussian simulation to construct a categorical type of reservoir model; then, the SSC and GM methods are presented under the framework of truncated Gaussian simulation to invert lithofacies distributions. Applications of the two methods to invert sand/shale distributions in 2D and 3D reservoir models are made using synthetic data sets, with emphasis on the comparisons of the strengths and weaknesses of the two methods. The production data considered in this paper are fractional-flow rates (water cut) at production wells and water-saturation spatial distribution at a given time in two-phase-flow (oil/water) reservoirs.
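The truncation step at the core of truncated Gaussian simulation can be sketched as follows: a correlated standard-normal field is thresholded at the Gaussian quantile of the target sand proportion. The smoothing used here is a crude stand-in for variogram-based simulation, and all names are illustrative:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)
n = 64
target_sand = 0.35                      # prior sand proportion
threshold = NormalDist().inv_cdf(target_sand)

# correlated standard-normal field: smooth white noise with an FFT moving
# average, then re-standardize (a stand-in for variogram-based simulation)
white = rng.standard_normal((n, n))
kernel = np.zeros((n, n))
kernel[:5, :5] = 1.0 / 25.0
smooth = np.real(np.fft.ifft2(np.fft.fft2(white) * np.fft.fft2(kernel)))
z = (smooth - smooth.mean()) / smooth.std()

# truncation: facies is sand where the Gaussian deviate falls below threshold
facies = (z < threshold).astype(int)    # 1 = sand, 0 = shale
sand_fraction = float(facies.mean())
```

This one-to-one mapping between Gaussian deviates and facies is what lets SSC and GM perturb the continuous `z` field during inversion while the fixed threshold preserves the prior proportions and structure.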


2021 ◽  
Author(s):  
Ali Al-Turki ◽  
Obai Alnajjar ◽  
Majdi Baddourah ◽  
Babatunde Moriwawon

Abstract The algorithms and workflows have been developed to couple efficient model parameterization with stochastic, global optimization using a Multi-Objective Genetic Algorithm (MOGA) for global history matching, and coupled with an advanced workflow for streamline sensitivity-based inversion for fine-tuning. During parameterization the low-rank subsets of most influencing reservoir parameters are identified and propagated to MOGA to perform the field-level history match. Data misfits between the field historical data and simulation data are calculated with multiple realizations of reservoir models that quantify and capture reservoir uncertainty. Each generation of the optimization algorithms reduces the data misfit relative to the previous iteration. This iterative process continues until a satisfactory field-level history match is reached or there are no further improvements. The fine-tuning process of well-connectivity calibration is then performed with a streamline sensitivity-based inversion algorithm to locally update the model to reduce well-level mismatch. In this study, an application of the proposed algorithms and workflow is demonstrated for model calibration and history matching. The synthetic reservoir model used in this study is discretized into millions of grid cells with hundreds of producer and injector wells. It is designed to generate several decades of production and injection history to evaluate and demonstrate the workflow. In field-level history matching, reservoir rock properties (e.g., permeability, fault transmissibility, etc.) are parameterized to conduct the global match of pressure and production rates. Grid Connectivity Transform (GCT) was used and assessed to parameterize the reservoir properties. In addition, the convergence rate and history match quality of MOGA was assessed during the field (global) history matching. 
Also, the effectiveness of the streamline-based inversion was evaluated by quantifying the additional improvement in history matching quality per well. The developed parametrization and optimization algorithms and workflows revealed the unique features of each of the algorithms for model calibration and history matching. This integrated workflow has successfully defined and carried uncertainty throughout the history matching process. Following the successful field-level history match, the well-level history matching was conducted using streamline sensitivity-based inversion, which further improved the history match quality and conditioned the model to historical production and injection data. In general, the workflow results in enhanced history match quality in a shorter turnaround time. The geological realism of the model is retained for robust prediction and development planning.
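A minimal sketch of the multi-objective ingredient of MOGA: given per-model misfits for two objectives (e.g., pressure and rate mismatch), extract the nondominated (Pareto-optimal) models. This is only the selection step, with invented data, not the full genetic algorithm or workflow:

```python
import numpy as np

rng = np.random.default_rng(11)

# per-model misfits for two objectives (e.g., field pressure and rate mismatch)
objs = rng.random((30, 2))

def nondominated(objs):
    """Indices of Pareto-optimal rows (minimization in every objective)."""
    keep = []
    for i, a in enumerate(objs):
        dominated = any(
            np.all(b <= a) and np.any(b < a)
            for j, b in enumerate(objs) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

front = nondominated(objs)   # models worth carrying to the next generation
```

In a full MOGA, members of this front would seed crossover and mutation for the next generation, which is how each generation reduces the misfit relative to the previous one.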


2021 ◽  
Author(s):  
M. A. Borregales Reverón ◽  
H. H. Holm ◽  
O. Møyner ◽  
S. Krogstad ◽  
K.-A. Lie

Abstract The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) method has been popular for petroleum reservoir history matching. However, the increasing inclusion of automatic differentiation in reservoir models opens the possibility to history-match models using gradient-based optimization. Here, we discuss, study, and compare ES-MDA and a gradient-based optimization for history-matching waterflooding models. We apply these two methods to history match reduced GPSNet-type models. To study the methods, we use an implementation of ES-MDA and a gradient-based optimization in the open-source MATLAB Reservoir Simulation Toolbox (MRST), and compare the methods in terms of history-matching quality and computational efficiency. We show complementary advantages of both ES-MDA and gradient-based optimization. ES-MDA is suitable when an exact gradient is not available and provides a satisfactory forecast of future production that often envelops the reference history data. On the other hand, gradient-based optimization is efficient if the exact gradient is available, as it then requires a low number of model evaluations. If the exact gradient is not available, using an approximate gradient or ES-MDA are good alternatives and give equivalent results in terms of computational cost and quality predictions.
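The trade-off discussed — few forward evaluations when an exact gradient is available versus one forward run per ensemble member per assimilation for ES-MDA — can be illustrated on a linear toy problem (all names, sizes, and the forward model are assumptions, not the MRST implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

G = np.array([[1.0, 0.3], [0.4, 1.2], [0.7, 0.9]])
m_true = np.array([1.5, -0.5])
d_obs = G @ m_true

def forward(m):
    return G @ m

# exact-gradient optimization: one forward-equivalent evaluation per iteration
m, grad_evals = np.zeros(2), 0
for _ in range(100):
    g = 2.0 * G.T @ (forward(m) - d_obs)   # adjoint/AD-style exact gradient
    grad_evals += 1
    m -= 0.1 * g
grad_misfit = float(np.sum((forward(m) - d_obs) ** 2))

# ES-MDA: no gradient, but one forward run per member per assimilation
n_ens, n_mda, obs_std = 50, 4, 0.01
m_ens = rng.normal(size=(n_ens, 2))
esmda_evals = 0
for _ in range(n_mda):
    d_pred = np.array([forward(mm) for mm in m_ens])
    esmda_evals += n_ens
    dm = m_ens - m_ens.mean(axis=0)
    dd = d_pred - d_pred.mean(axis=0)
    c_md = dm.T @ dd / (n_ens - 1)
    c_dd = dd.T @ dd / (n_ens - 1)
    gain = c_md @ np.linalg.inv(c_dd + n_mda * obs_std**2 * np.eye(3))
    d_pert = d_obs + np.sqrt(n_mda) * obs_std * rng.standard_normal(d_pred.shape)
    m_ens = m_ens + (d_pert - d_pred) @ gain.T
esmda_misfit = float(np.mean(np.sum((m_ens @ G.T - d_obs) ** 2, axis=1)))
```

Both reach a good match here, but the gradient method does so with far fewer forward evaluations, while ES-MDA additionally yields an ensemble whose spread can envelop the reference history, matching the complementary advantages the abstract reports.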

