Geostatistical conditional simulation


Geostatistical simulation is well accepted in the petroleum industry as a method for characterizing heterogeneous reservoirs. It often is preferable to traditional interpolation approaches, in part because it captures the heterogeneous character observed in many petroleum reservoirs and provides more accurate hydrocarbon reserve estimates. Geostatistical simulation methods preserve the variance observed in the data, instead of just the mean value, as in interpolation. Their stochastic approach allows calculation of many equally probable solutions (realizations), which can be post-processed to quantify and assess uncertainty.

Use of stochastic approach

Many practitioners are suspicious of stochastic methods—and even reject them outright—because natural processes that form reservoirs are not random. But geostatistical stochastic simulation is not a coin-toss experiment. Furthermore, while it is true that reservoirs are not products of random processes, it also is true that they have attributes that cause them to behave as if they were random. For example, physical and chemical processes modify reservoir characteristics from their original state, confounding our ability to make predictions even when we understand the processes. Such changes cause behavior that can be captured by stochastic principles. [1] [2] [3]

Kriging is a deterministic method whose function has a unique solution and does not attempt to represent the actual variability of the studied attribute. The smoothing property of any interpolation algorithm replaces local detail with a good average value; however, the geologist and reservoir engineer are more interested in finer-scaled details of reservoir heterogeneity than in a map of local estimates of the mean value. Like the traditional deterministic approach, stochastic methods preserve hard data where known and soft data where informative. Unlike the deterministic approach, though, they provide geoscientists and reservoir engineers with many realizations. The kriged solution is the average of numerous realizations, and the variability among the different outcomes is a measure of uncertainty at any location. Thus, the standard deviation of all values simulated at each grid node is a quantification of uncertainty.[2] [3]

What do we want from a simulation?

Which simulation method we choose depends on what we want from a stochastic-modeling effort and, to a great extent, on the types of available data. Not all conditional simulation studies need a “Cadillac” method. For many, a “Volkswagen” serves the purpose well. Among the reasons for performing stochastic simulation, four important ones are:

  1. To capture heterogeneity
  2. To simulate facies or petrophysical properties, or both
  3. To honor and integrate multiple data types
  4. To quantify and assess uncertainty

Principles of stochastic modeling

In general, conditional simulation requires that the basic input parameters—the spatial model (variograms) and the distribution of sample values (cumulative distribution function, or cdf)—remain constant within a given geologic interval and/or facies, from realization to realization. Typically, the structural and stratigraphic model (major structural surfaces and the discretized layers between them) remains fixed. Because each realization begins with a different, random seed number, each has a unique “random walk,” or navigational path through the 3D volume. The random walk provides the simulation algorithm with the order of cells to be simulated, and is different from realization to realization; therefore, the results are different at unsampled locations, producing local changes in the distribution of facies and petrophysical properties in the interwell space. Note that selection of the same random seed always will reproduce the same random walk. This characteristic is for computational convenience. In practice, multiple realizations are performed at or close to the geologic scale, and not necessarily at the flow-simulation scale.
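As a minimal sketch of this behavior (assuming NumPy; the grid dimensions and the function name random_walk are hypothetical), the snippet below generates the visiting order for a 3D grid. Re-using a seed reproduces the same path exactly, whereas a different seed yields a different path and, therefore, a different realization at unsampled cells.

```python
import numpy as np

def random_walk(nx, ny, nz, seed):
    """Return a random visiting order (the "random walk") over all cells
    of an nx * ny * nz grid; the seed fully determines the path."""
    rng = np.random.default_rng(seed)
    return rng.permutation(nx * ny * nz)  # flat cell indices in visiting order

path_a = random_walk(50, 50, 20, seed=1)
path_b = random_walk(50, 50, 20, seed=2)        # a different path
path_a_again = random_walk(50, 50, 20, seed=1)  # reproduces path_a exactly
assert (path_a == path_a_again).all() and not (path_a == path_b).all()
```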

There are two basic categories of conditional simulation methods:

  • Pixel-based methods operate on one pixel at a time. They can be used with either continuous or categorical data.
  • Object-based methods operate on groups of pixels that are connected and arranged to represent genetic shapes of geologic features. They are used only with categorical data.

Table 1 provides a summary description of conditional simulation methods.

Pixel-based model

A pixel-based model assumes that the variable to be simulated is a realization of a continuous (Gaussian) random function. Using the spatial model, search ellipse, and control data, a pixel-based method simulates values grid node by grid node. Some of the most popular pixel-based algorithms are turning bands, sequential Gaussian, sequential indicator, truncated Gaussian, and simulated annealing. Each method can produce a range of realizations that capture the uncertainty of a regionalized variable (RV), so the choice of method depends on the goals of the study and on the types and availability of data. The pixel-based method works best in the presence of facies associations that vary smoothly across the reservoir, as often is the case in deltaic or shallow marine reservoirs. No assumption is made about the shape of the sedimentary bodies. This method is preferred when the net-to-gross ratio is high.[4]

Object-based (Boolean) model

The algorithms for a Boolean model generate spatial distributions of sedimentary bodies (channels, crevasse splays, reefs, etc.) whose parameters (orientation, sinuosity, length, width, etc.) can be inferred from the assumed depositional model, seismic data, outcrops, and even well-test interpretations. The object-based method simulates many grid nodes at one time, superimposing geometries (e.g., sheets, discs, or sinusoids) onto a background that typically is a shaly lithofacies. The method used for object modeling is referred to as the marked-point process. [5] [6] [7] This method generally works best with a low net-to-gross ratio and widely spaced wells.[4]

Selecting the right method

It is difficult to say a priori which type of method, pixel- or object-based, is best. Although we have observed that it is common for practitioners to have a personal bias toward one method or the other, the basis for such preference often is not well founded. For example, method selection often is based on familiarity with the procedure or what is available in a given software package. Additionally, we have observed that geologists tend to prefer object-based methods because they often produce realizations that appear “crisp” and realistic (e.g., deltas look like deltas and channels look like channels). Engineers tend toward pixel-based methods because they require less descriptive input and often are computationally faster than object-based methods. In fact, both methods are computationally sound and offer unique characteristics.

From a practical point of view, the methods can be combined to achieve an effective model. For example, the practitioner could model a transition sequence from offshore to onshore with a pixel-based method, and then superimpose a channel system from a delta using an object-based method.

To help in selecting the appropriate methods, we can offer the following for consideration:

  1. Pixel-based methods are more forgiving in that they require fewer assumptions about the data. As such, the error variance generated from a set of realizations generally will be higher than with object-based modeling. We surmise that pixel-based models create a larger space of uncertainty and therefore are more likely to “capture” the correct solution, even if the initial conceptual geologic model is incorrect.
  2. Object-based models work best when the data density and net-to-gross ratio are low. A set of object realizations will generate a lower error variance than that from a pixel-based model, and thus can be said to have a smaller space of uncertainty. When the conceptual geologic model is strongly supported by the data and is well understood, the method is highly successful; however, because more assumptions about the data are required, the resultant realizations are less forgiving (i.e., if the original conceptual model is wrong, there is little chance of it successfully capturing the correct solution).

Stochastic simulation methods

There are several theoretically sound and practically tested conditional-simulation approaches, and choosing one can be bewildering and daunting for a novice to stochastic methods. Parametric-simulation techniques assume that the data have a Gaussian distribution, so a transform of the data typically is a prerequisite. Note that indicator data do not undergo such a transform. Furthermore, data transformations are not required in the object-based method, which uses only indicator data. The steps of parametric simulation are listed below (a sketch of the transform step follows the list):

  • Perform a normal-score transform of the data from z-space to y-space.
  • Compute and model the variogram (covariance) of the y normal scores.
  • Perform multiple simulations of the y normal scores on a grid or within a volume.
  • Back-transform the simulated y normal scores to simulated z-values.
  • Post-process the multiple simulations to assess uncertainty.
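The first and last of these steps can be sketched with a few lines of code. The snippet below (a minimal illustration assuming NumPy and SciPy; the porosity values are hypothetical) builds normal scores from the data ranks and back-transforms simulated scores by quantile matching against the original sample.

```python
import numpy as np
from scipy.stats import norm

def normal_score_transform(z):
    """Map sample values z to standard-normal scores y via their ranks."""
    n = len(z)
    ranks = np.empty(n)
    ranks[np.argsort(z)] = np.arange(1, n + 1)
    p = (ranks - 0.5) / n          # plotting positions keep p away from 0 and 1
    return norm.ppf(p)

def back_transform(y, z_ref):
    """Map simulated normal scores y back to z-space by quantile matching
    against the reference sample z_ref (a simple table-lookup back transform)."""
    return np.quantile(z_ref, norm.cdf(y))

porosity = np.array([0.08, 0.11, 0.12, 0.15, 0.21, 0.24])  # hypothetical data
y = normal_score_transform(porosity)   # step 1: z-space to y-space
z_back = back_transform(y, porosity)   # step 4: y-space back to z-space
```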

Turning bands simulation

In turning bands simulation (one of the earliest simulation methods), first the data are kriged and then unconditional simulations are created using a set of randomly distributed bands, or lines. The general procedure is as follows:

  1. Raw data values are kriged to a regular grid.
  2. Numerous random lines (bands) with various azimuths are generated around a centroid located at the grid or volume center. The modeler controls the number of lines.
  3. Unconditional simulations of normal-score transformed data are performed along each line using the transformed-data histogram and variogram.
  4. Values along the lines are linearly interpolated to grid nodes—the more lines, the less interpolation.
  5. The unconditional simulated values are interpolated back to the well locations.
  6. The unconditional values at the well locations are kriged to the same grid.
  7. The kriged grid of unconditional values (step 6) is subtracted from the grid of unconditional simulated values (step 4). This creates a residual map with a value of zero at the well locations.
  8. The residuals are back-transformed from the y-space to the z-space.
  9. The back-transformed residuals from step 8 are added to the original kriged map from step 1.
  10. The result is a grid or volume of values that reproduces both the mean and the variance of the raw data.

For a more complete explanation, see Mantoglou and Wilson.[8]
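The conditioning logic of steps 5 through 9 can be sketched in one dimension. In the snippet below (assuming NumPy; the covariance model, well positions, and values are hypothetical), a Cholesky factorization of an exponential covariance stands in for the band generation of steps 2 through 4, and the sketch stays in normal-score space, so the back transform of step 8 is omitted; otherwise it follows the numbered steps.

```python
import numpy as np

def exp_cov(h, sill=1.0, a=10.0):
    """Exponential covariance model C(h) = sill * exp(-3|h| / a)."""
    return sill * np.exp(-3.0 * np.abs(h) / a)

def simple_krige(x_grid, x_data, y_data):
    """Simple kriging (zero mean in normal-score space) of data onto a grid."""
    C = exp_cov(x_data[:, None] - x_data[None, :])
    c0 = exp_cov(x_grid[:, None] - x_data[None, :])
    return y_data @ np.linalg.solve(C, c0.T)

rng = np.random.default_rng(7)
x_grid = np.arange(0.0, 100.0, 1.0)
x_wells = np.array([5.0, 37.0, 62.0, 88.0])
y_wells = np.array([-0.8, 0.4, 1.1, -0.2])       # normal scores at the wells

# Step 1: krige the (transformed) well data onto the grid.
y_kriged = simple_krige(x_grid, x_wells, y_wells)

# Steps 2-4 stand-in: one unconditional Gaussian realization on the grid.
C_grid = exp_cov(x_grid[:, None] - x_grid[None, :]) + 1e-8 * np.eye(len(x_grid))
y_uncond = np.linalg.cholesky(C_grid) @ rng.standard_normal(len(x_grid))

# Step 5: sample the unconditional realization at the well locations.
y_uncond_wells = np.interp(x_wells, x_grid, y_uncond)

# Step 6: krige those unconditional well values back onto the grid.
y_uncond_kriged = simple_krige(x_grid, x_wells, y_uncond_wells)

# Step 7: residual field, equal to zero at the wells by construction.
residual = y_uncond - y_uncond_kriged

# Step 9: conditional realization = kriged data + residual (step 8 omitted here).
y_conditional = y_kriged + residual
assert np.allclose(np.interp(x_wells, x_grid, y_conditional), y_wells, atol=1e-6)
```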

Sequential simulation

Three sequential-simulation procedures use the same basic algorithm for different data types:

  • Sequential Gaussian simulation (SGS) simulates continuous variables, such as petrophysical properties
  • Sequential indicator simulation (SIS) simulates discrete variables, using SGS methodology to create a grid of zeros and ones
  • Bayesian indicator simulation (a newer form of SIS) allows direct integration of seismic attributes, and uses a combination of classification and indicator methods.

As described in the literature, the general sequential simulation process is[9] [10]:

  1. Perform a normal-score transformation of the raw data.
  2. Randomly select a node that is not yet simulated in the grid.
  3. Estimate the local conditional probability distribution function (lcpd) of the residuals at the selected node. The residuals can be calculated by subtracting the kriged grid of the unconditional values sampled at the well locations from the grid of the unconditional simulation.
  4. Create a newly simulated value by adding the randomly drawn residual value to the locally estimated (kriged) mean of the transformed data at that node.
  5. Include the newly simulated value in the set of conditioning data, within a specified radius of the new target location. This ensures that closely spaced values have the correct short-scale correlation.
  6. Repeat until all grid nodes have a simulated value.

As with turning-bands simulation, each time a new random walk is defined, a new and different result will occur. In this case, though, the lcpd is updated continually by the previously simulated values.
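A minimal one-dimensional sequential Gaussian simulation can be sketched as follows (assuming NumPy; the covariance model, well coordinates, and search limit are hypothetical). At each node of the random walk, simple kriging of the nearby well data and previously simulated values supplies the mean and variance of the lcpd, from which the new value is drawn.

```python
import numpy as np

def exp_cov(h, sill=1.0, a=10.0):
    """Exponential covariance model C(h) = sill * exp(-3|h| / a)."""
    return sill * np.exp(-3.0 * np.abs(h) / a)

def sgs_1d(x_grid, x_data, y_data, seed, search_max=12):
    """Minimal 1D sequential Gaussian simulation of normal scores."""
    rng = np.random.default_rng(seed)
    y_sim = np.full(len(x_grid), np.nan)
    # Assign the hard data to their nearest grid nodes so they are honored exactly.
    for xd, yd in zip(x_data, y_data):
        y_sim[np.argmin(np.abs(x_grid - xd))] = yd
    for idx in rng.permutation(len(x_grid)):           # the random walk
        if not np.isnan(y_sim[idx]):
            continue                                   # data node: leave untouched
        known = np.flatnonzero(~np.isnan(y_sim))
        # Condition on the nearest data and previously simulated nodes.
        near = known[np.argsort(np.abs(x_grid[known] - x_grid[idx]))[:search_max]]
        xk, yk = x_grid[near], y_sim[near]
        C = exp_cov(xk[:, None] - xk[None, :])
        c0 = exp_cov(xk - x_grid[idx])
        w = np.linalg.solve(C, c0)
        mean = w @ yk                                  # lcpd mean (simple kriging)
        var = max(exp_cov(0.0) - w @ c0, 1e-12)        # lcpd variance
        # Draw from the lcpd; the value immediately becomes conditioning data.
        y_sim[idx] = mean + np.sqrt(var) * rng.standard_normal()
    return y_sim

x_grid = np.arange(0.0, 50.0, 1.0)
x_wells = np.array([4.0, 21.0, 40.0])
y_wells = np.array([-1.0, 0.3, 1.2])                      # normal scores at the wells
realization_a = sgs_1d(x_grid, x_wells, y_wells, seed=1)
realization_b = sgs_1d(x_grid, x_wells, y_wells, seed=2)  # a different random walk
```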

Truncated Gaussian simulation

Developed by the Institut Français du Pétrole and the Centre de Géostatistique,[11] [12] [13] the truncated Gaussian algorithm simulates lithofacies directly by using a set of cutoffs that partition the Gaussian field. The cutoffs commonly are generated from facies proportions calculated from well data. One simple method for doing this is to calculate a vertical proportion curve: a stacked bar diagram that represents the percentage of each facies, over all wells in the study area. The proportion of each facies is calculated layer by layer, where a layer is a subdivision of the reservoir unit being modeled. Truncated Gaussian simulation involves first generating a continuous Gaussian variable, and then applying cutoffs (termed the Gaussian thresholds) during the simulation. This method works exceptionally well with transitional facies, such as those from foreshore to upper shoreface to lower shoreface to offshore.
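The truncation step can be sketched as follows (assuming NumPy and SciPy; the facies names and proportions are hypothetical, and an uncorrelated standard-normal field stands in for a properly simulated Gaussian field). The cumulative facies proportions are mapped through the inverse Gaussian cdf to obtain the thresholds, which then convert the continuous realization into facies codes.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical facies proportions (e.g., one layer of a vertical proportion
# curve), ordered along the transitional sequence.
facies = ["offshore", "lower shoreface", "upper shoreface", "foreshore"]
proportions = np.array([0.40, 0.30, 0.20, 0.10])

# Gaussian thresholds: cumulative proportions mapped through the inverse cdf.
cutoffs = norm.ppf(np.cumsum(proportions))[:-1]   # three thresholds, four facies

def truncate(gaussian_field):
    """Convert a continuous Gaussian realization into facies codes 0..3."""
    return np.digitize(gaussian_field, cutoffs)

# Stand-in continuous realization (in practice, a simulated Gaussian field).
rng = np.random.default_rng(3)
facies_map = truncate(rng.standard_normal((50, 50)))  # codes index `facies`
```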

Simulated annealing simulation

The simulated-annealing simulation method is borrowed from metallurgy. In metallurgy, when fusing two pieces of metal, the attachment zone is heated to a temperature at which the molecular structure can be rearranged. As the metal cools again, the molecular structure changes and a bond forms where the two pieces of metal are joined. In transferring this idea to stochastic modeling, one produces an initial realization, introduces some particular conditions (new pieces of “metal” to be fused), then “heats” and “cools” it to rearrange the pixels (or objects) to match (bond) the particular conditions introduced.

The simulated-annealing simulation method constructs the reservoir model through an iterative, trial-and-error process, and does not use an explicit random-function model. It can be used as the basis for both pixel- and object-based simulation, and in either case the simulated image is formulated as an optimization process.[14] [3] For example, our desired result might be an image of a sand/shale model with a 70% net-to-gross ratio, an average shale length of 60 m, and an average shale thickness of 10 m. The starting image has pixels (or objects) arranged randomly, with the correct global proportion of sand and shale but with an incorrect spatial arrangement that stems from the completely random assignment; in particular, the average shale lengths and widths are not correct. During the computation, the annealing algorithm modifies the initial image by swapping information from node to node and determining whether or not an individual swap improves the realization. The method also accepts some swaps that temporarily worsen the match, which prevents the algorithm from becoming trapped in a “local minimum,” a well-known problem with annealing techniques. The swapping process continues until a final image is produced that matches the statistics of the input data.

The simulated-annealing process produces excellent results, but can be inefficient because millions of perturbations may be required to arrive at the desired image. Nevertheless, the availability of faster computers with more memory is making simulated-annealing simulation methods more attractive.[14] [3] They are particularly desirable for integrating dynamic data, such as production histories and well tests, to ensure history matching from any realization.
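The swap-and-accept loop can be sketched as follows (assuming NumPy; the grid size, target net-to-gross ratio, indicator-variogram target, iteration count, and cooling schedule are all hypothetical, and conditioning to well data is omitted). Swapping one sand cell with one shale cell preserves the global proportions, while a Metropolis-style acceptance rule occasionally keeps a swap that worsens the match so the search can escape local minima.

```python
import numpy as np

rng = np.random.default_rng(5)
ny, nx = 60, 60
p_sand = 0.7                                   # target net-to-gross ratio
lags = np.array([1, 2, 4, 8, 16])
a_range = 20.0                                 # assumed horizontal range (cells)
# Target indicator semivariogram (exponential model) at the chosen lags.
target = p_sand * (1 - p_sand) * (1.0 - np.exp(-3.0 * lags / a_range))

def row_variogram(img, lags):
    """Experimental indicator semivariogram along rows for the given lags."""
    return np.array([0.5 * np.mean((img[:, h:] - img[:, :-h]) ** 2) for h in lags])

def objective(img):
    """Mismatch between the image statistics and the target statistics."""
    return np.sum((row_variogram(img, lags) - target) ** 2)

# Initial image: correct global sand proportion, completely random arrangement.
img = (rng.random((ny, nx)) < p_sand).astype(float)
obj = objective(img)
temp = 1e-4                                    # initial "temperature"

for _ in range(10000):
    # Propose swapping one sand cell with one shale cell (proportions unchanged).
    i = tuple(np.argwhere(img == 1)[rng.integers((img == 1).sum())])
    j = tuple(np.argwhere(img == 0)[rng.integers((img == 0).sum())])
    img[i], img[j] = img[j], img[i]
    new_obj = objective(img)
    if new_obj <= obj or rng.random() < np.exp(-(new_obj - obj) / temp):
        obj = new_obj                          # keep the swap
    else:
        img[i], img[j] = img[j], img[i]        # reject: undo the swap
    temp *= 0.999                              # cooling schedule
```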

Boolean simulation

Boolean simulation methods create reservoir models based on objects that have a genetic significance, rather than building up models one elementary node or pixel at a time.[14] [5] [6] [7] [15] [16] To use Boolean methods, select a basic shape for each depositional facies that describes its geometry. For example, you might model channels that look sinuous in map view and half-elliptical in cross section, or deltas that look like triangular wedges in map view. The modeler must specify the proportions of shapes in the final model and choose parameters that describe the shapes. Some algorithms have rules describing how geologic bodies are positioned relative to each other. For example, can the objects cross each other like braided streams, or attach like splays and channels? Do objects repel or attract each other, or must there be a minimum distance between the shapes?

Once the shape distribution parameters and position rules are chosen, the six-step simulation is performed:

  1. Fill the reservoir model background with a lithofacies, such as shale.
  2. Randomly select a starting point in the model.
  3. Randomly select one lithofacies shape and draw it according to the shape, size, and orientation rules.
  4. Check to see whether the shape conflicts with any control data or with previously simulated shapes. If it does, reject it and go back to step 3.
  5. Check to see whether the global lithofacies proportions are correct. If they are not, return to step 2.
  6. Use pixel-based methods to simulate petrophysical properties within the geologic bodies.

If control data must be honored, objects are simulated at control points first, before simulating the interwell region. Be sure that there are no conflicts with known stratigraphic and lithologic sequences at the well locations.
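A minimal object-placement sketch (assuming NumPy; the grid size, target proportion, well observations, and elliptical body geometry are all hypothetical) follows steps 1 through 5 above; step 6, filling the bodies with petrophysical properties, is omitted. Bodies are first seeded at wells that observed sand, candidate bodies that contradict a shale observation are rejected, and placement continues until the global proportion target is met.

```python
import numpy as np

rng = np.random.default_rng(9)
ny, nx = 100, 100
target_ng = 0.30                                # low net-to-gross suits object modeling
grid = np.zeros((ny, nx), dtype=int)            # step 1: shale background (code 0)

# Hypothetical control data: (row, col, facies) observed at three wells.
wells = [(20, 30, 1), (70, 75, 1), (50, 10, 0)]  # 1 = sand, 0 = shale

def stamp_ellipse(grid, cy, cx, ry, rx):
    """Return a copy of the grid with an elliptical sand body (code 1) added."""
    yy, xx = np.ogrid[:grid.shape[0], :grid.shape[1]]
    mask = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
    out = grid.copy()
    out[mask] = 1
    return out

# Honor the control data first: seed bodies at wells that observed sand.
for r, c, f in wells:
    if f == 1:
        grid = stamp_ellipse(grid, r, c, rng.uniform(3, 8), rng.uniform(10, 25))

while grid.mean() < target_ng:                  # step 5: check global proportion
    # Steps 2-3: random location and a randomly sized elliptical body.
    cy, cx = rng.integers(ny), rng.integers(nx)
    candidate = stamp_ellipse(grid, cy, cx, rng.uniform(3, 8), rng.uniform(10, 25))
    # Step 4: reject any body that contradicts a shale observation at a well.
    if any(candidate[r, c] == 1 and f == 0 for r, c, f in wells):
        continue
    grid = candidate
```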

Boolean techniques currently are of interest to the petroleum industry, and geologists find object modeling particularly satisfactory because the shapes created are based on geometries and facies relationships that have been measured, and because the images look geologically realistic. Criticisms of Boolean modeling are that it requires a large number of input parameters and that prior knowledge is needed to select the parameter values. Furthermore, in the past, Boolean-type algorithms could not always honor all the control data because the algorithms are not strict simulators of shape, and often require changes to the facies proportions or object sizes to complete a given realization; however, new technology has greatly alleviated this problem. A number of research/academic institutions and vendors continue to research better ways to implement these algorithms.

Uncertainty quantification and assessment

As discussed earlier, stochastic models allow quantification of uncertainty related to the geologic description. An infinite number of possible realizations are obtained simply by modifying the seed number. Comparing a sufficiently large number of these realizations then will provide a measurement of the uncertainty associated with the assumed geologic model.

Since the late 1990s, real modeling experiences have generated questions concerning the workflow, aimed at capturing and developing a working definition for uncertainty. Currently, for example, the accepted implementation of stochastic modeling involves four general steps:

  • Produce multiple realizations of the fine-scaled model by changing the seed number
  • Rank the results on the basis of some criteria
  • Upscale the P10, P50, and P90 results
  • Flow-simulate the above three solutions to capture the range of the uncertainty

One common criticism of the above workflow is that the actual space of uncertainty is much larger than that explored by the variability of the random function. This concept often is overlooked; we tend to identify uncertainty on the basis of stochastic simulations that fix all input parameters and simply change the seed value from simulation to simulation. Our focus so far has been on understanding the random function and the uncertainty around it; we now turn our attention to other important uncertainties that deal with the geologic model. There are at least five sources of uncertainty in a typical reservoir model[14] [2] [4] [17] [18] [19] [20]:

  1. Uncertainty in data quality and interpretation—Basic data-measurement errors generally are ignored and the data are treated as error-free when modeling the reservoir. The same holds true for interpreted data, such as potential errors in picking seismic time horizons.
  2. Uncertainty in the structural model—The structural model virtually always is created using a deterministic approach (e.g., when seismic time horizons are converted to depth using an uncertain velocity model, then treated as fixed surfaces in the geologic model.) The structural model is one of the largest sources of uncertainty, and greatly affects volumetric calculations.
  3. Uncertainty in the stratigraphic model—Uncertainty in the stratigraphic model is related to the reliability of sequence determination and correlation through the wells.
  4. Uncertainty in the stochastic model choice and its parameters—If the same geologic sequence were modeled using different algorithms, each stochastic simulation method would yield different results and explore a different part of the space of uncertainty; however, the space sampled by the different algorithms would overlap significantly.
  5. Uncertainty from multiple realizations—The uncertainty reported in most stochastic-modeling studies usually is from multiple realizations. Compared to the other sources for error, it often is small (but not always).

The above sources of uncertainty are listed in decreasing order of significance. It should be somewhat intuitive that change in the data, structural model, or sequence stratigraphic model is likely to have a greater impact on the reservoir model than will changing a single parameter of a given random function; however, it is important to understand that when building a geologic model, all levels of uncertainty must be accounted for to achieve an accurate assessment of the space of uncertainty. The next section further defines the order of uncertainty and the relationship between scenarios and realizations.

Realizations, scenarios, and the space of uncertainty

To account for the different sources of uncertainty, we can classify uncertainty into three orders on the basis of the degree of impact on the reservoir model.

  • First-order uncertainty stems from major changes in modeling assumptions, such as changing the data or the structural model, testing of different depositional models, or changing the petrophysical model.
  • Second-order uncertainty is caused by small changes to the parameters of the random function (e.g., the sill, range, or model of the variogram), to the cumulative distribution functions (cdfs), or to the stochastic-model choice.
  • Third-order uncertainty results from changes only in the interwell space that are caused by the algorithm selected with its parameterization, and by changing the seed number from realization to realization. (Parameters that control first- and second-order uncertainties remain fixed.) We refer to the larger changes in uncertainty represented by the first and second orders as scenarios, whereas we call the third-order changes realizations.

Quite often, scenario modeling plays a key role early in the field development phase, when data are sparse and operators hold fundamental differences of opinion about the appropriate conceptual depositional model to use. For example, they might differ on whether the system is wave-dominated fluvial deltaic or tidal-dominated. Although both are deltaic, the overprint of the oceanic processes changes the strike of dominant sand bodies, making it either parallel to the coast (wave-dominated) or perpendicular to the coast (tidal-dominated). Note that each scenario model will require its own set of realizations. Thus, measuring the total space of uncertainty will require that multiple scenarios and their respective realizations be constructed and their ranges of uncertainty be pooled together. This may require a great deal of computational effort, but it often is a mistake to assume that by modeling the scenarios without their respective realizations, the critical spectrum of uncertainty will be captured.

Static displays of uncertainty

The most common way to visualize uncertainty is as a static view, through summary-statistics maps prepared from a suite of realizations. Several types of displays conventionally are used for this purpose. These include:

  • Maps of mean and median
  • Maps of spread
  • Uncertainty/probability/risk maps
  • Isoprobability maps
  • Display of multiple realizations

Maps of mean and median

Mean and median maps are based on the average and median of a given number of conditional simulations. At each cell, the program computes the average or median value for the values at that location from all simulations. When the number of simulations is large, the map converges to the kriged solution. Fig. 1 shows the mean and median maps computed from 200 sequential Gaussian simulations (200 SGS) of net pay. Mean and median maps are identical when they are based on the statistics of an infinite number of simulations.

Maps of spread

Spread is most commonly displayed as a map of the standard deviation (error) at each grid cell, computed from all input maps (Fig. 2).
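With a suite of realizations held in a single array, the mean, median, and spread maps reduce to cell-by-cell statistics. A minimal sketch (assuming NumPy; the stack of net-pay realizations here is a synthetic stand-in for simulation output):

```python
import numpy as np

rng = np.random.default_rng(17)
# Stand-in for 200 simulated 50 x 50 net-pay grids (e.g., 200 SGS realizations).
realizations = rng.gamma(shape=4.0, scale=3.0, size=(200, 50, 50))

mean_map = realizations.mean(axis=0)        # approaches the kriged map as n grows
median_map = np.median(realizations, axis=0)
std_map = realizations.std(axis=0)          # cell-by-cell spread (uncertainty)
```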

Maps of uncertainty, probability, and risk

The probability of meeting or exceeding a user-specified threshold at each grid cell is displayed using maps of uncertainty, probability, and risk, in which grid-cell values range from 0 to 100%. Fig. 3 illustrates schematically how such a map is generated during post-processing. Such maps are used to assess the risk (uncertainty) on the basis of an economic criterion. For example, we might determine from recent drilling whether a well is commercial or noncommercial, on the basis of the probability of encountering 8 m of net pay. In Fig. 3, the vertical straight line at 8 m represents the threshold, whereas the left-hand and right-hand curved lines represent the probability distributions of values simulated at two grid nodes for proposed well locations. The left-hand curve shows only a 35% chance of encountering 8 m or more of net pay at its well location, but the right-hand curve shows its location has a 75% chance of meeting this economic criterion. During post-processing, the modeler fixes the threshold, and the program computes the probability of meeting or exceeding it. Fig. 4 shows risk maps for thresholds of 8 m and 16 m of net pay. Such maps are very useful for identifying well locations and probable nonreservoir areas.
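The post-processing behind such a risk map is a cell-by-cell exceedance frequency across the realizations. A minimal sketch (assuming NumPy; the realization stack is a synthetic stand-in, and the 8-m cutoff mirrors the example above):

```python
import numpy as np

rng = np.random.default_rng(17)
realizations = rng.gamma(shape=4.0, scale=3.0, size=(200, 50, 50))  # net pay, m

threshold_m = 8.0
# Fraction of realizations meeting or exceeding the threshold at each cell, in %.
risk_map = (realizations >= threshold_m).mean(axis=0) * 100.0
```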

Isoprobability maps

Rather than holding the threshold constant, sometimes it is preferable to freeze the probability value and create maps that correspond to local quantiles or percentiles. Such maps are known as isoprobability maps, and their grid-cell values relate to the attribute (here, meters of net pay), rather than to probability. Fig. 5 shows a probability plot, from which isoprobability maps are created. In this example, the uncertainty assessment involves modeling net pay. An isoprobability map created at the tenth percentile shows, at each location, a net-pay value that the simulations exceed 90% of the time; it therefore represents a pessimistic view of the hydrocarbon potential. Conversely, a ninetieth-percentile map presents an optimistic picture of the hydrocarbon potential, showing values that are exceeded in only 10% of the realizations. Fig. 6 shows isoprobability maps of net pay created for P10 and P90.
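Isoprobability maps are cell-by-cell percentiles of the same realization stack. A minimal sketch (assuming NumPy; the stack is again a synthetic stand-in):

```python
import numpy as np

rng = np.random.default_rng(17)
realizations = rng.gamma(shape=4.0, scale=3.0, size=(200, 50, 50))  # net pay, m

p10_map = np.percentile(realizations, 10, axis=0)  # pessimistic view of net pay
p90_map = np.percentile(realizations, 90, axis=0)  # optimistic view of net pay
```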

Multiple realizations

Another common format for illustrating uncertainty is simply to display several possible realizations that as a group represent the broad spectrum of outcomes (Fig. 7).

Dynamic displays of uncertainty

In his discussions on the use of animated (dynamic) displays of the realizations, Srivastava [21] emphasizes that, like a well-produced animated cartoon, a visually effective dynamic display of uncertainty should present the realizations in a gradual and logically successive and informative way (i.e., not simply in random succession). The key to a successful dynamic rendering of uncertainty is finding a way to show all the realizations. Separate displays of realizations would be a step in the right direction, but workstation screen space is limited, as is the patience of the modeler scrolling through the images. The scrolling process could be automated, presenting the realizations one at a time in rapid succession, but successive realizations might differ considerably in local detail, and create a flickering display that would be more irritating than illuminating.[21]

To make an appealing dynamic display, each realization must be treated as if it were a frame from a movie clip, with each successive frame showing minimal difference from the last, so that the brain interprets the minor transitions as gradual movement. Srivastava[21] suggests that it is possible to create an acceptable animation with as few as 10 frames/second.

For example, were we to create realizations of a structural top and make an animated display, the movie clip might show a perpetually undulating dome-shaped feature. The surface would appear fixed at well locations, and thus stable. Moving away from the well control, the surface would appear to swell and heave; small depressions would grow into large, deep holes and valleys. Small bumps would grow into hills and then into larger hills with ridges, and then would shrink again. In several seconds of animation, the modeler could see several hundred realizations, and the variability (uncertainty) in the suite of realizations would show in the magnitude of changes over time. The modeler instantly would recognize areas that show little movement (more certainty) vs. those that wax and wane (greater uncertainty).

In making dynamic displays, though, one must overcome the fact that most geostatistical simulation algorithms have no way to produce a series of realizations that show minimal differences, which animation requires. Recall that to generate a new simulated result, we select a new seed number for the random number generator and rerun the program. But any change in the seed number, however small, produces unpredictable changes in the appearance of the resulting simulation, which means that consecutive seed numbers, for example, could produce simulations that are quite different in their local variability. There is no way, then, to predict which seed numbers will produce similar-enough realizations to construct consecutive frames for the animation. One way we overcome this problem is with a simulation technique known as probability field simulation (P-field simulation), although this technique has its own advantages and disadvantages.

P-field simulation

P-field simulation is a conditional simulation technique developed by Froidevaux[22] and Srivastava.[23] Its advantage is that it is ideally suited to the problem of uncertainty animation. It sets up a matrix of probabilities with dimensions identical to the 2D or 3D project grid. The spatial model controls the pattern of the probabilities on the matrix; that is, a high probability value most likely will be adjacent to another high value, and such values can be arranged along the direction of continuity if the variogram is anisotropic. To generate a new realization, one only needs to shift the values in the probability matrix by one row or one column; it is not necessary to generate a new random seed and a subsequent random walk. The result is a small, incremental change from the previous realization. Interestingly, any conditional simulation method that uses unconditional simulation as an intermediate step can be configured to produce a set of ordered realizations that show a progression of small changes from one to the next and that can be animated.[23]
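A hedged sketch of this idea is shown below (assuming NumPy and SciPy; the smoothed-noise probability field and the Gaussian local distributions are simple stand-ins for a proper unconditional simulation and for kriging-derived local cdfs). Shifting the probability matrix by one column produces the next realization, so consecutive frames differ only incrementally and are suitable for animation.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import norm

rng = np.random.default_rng(23)
ny, nx = 80, 80

# A spatially correlated probability field: smoothed white noise rescaled to
# (0, 1) through the Gaussian cdf (a stand-in for an unconditional simulation).
noise = uniform_filter(rng.standard_normal((ny, nx)), size=9)
p_field = norm.cdf((noise - noise.mean()) / noise.std())

# Hypothetical local distributions (as if from kriging): a mean and a standard
# deviation at every cell, here simple synthetic surfaces.
yy, xx = np.mgrid[:ny, :nx]
local_mean = 10.0 + 5.0 * np.sin(xx / 15.0) * np.cos(yy / 20.0)
local_std = 1.0 + 2.0 * xx / nx              # uncertainty grows away from "wells"

def realization_from_shift(shift):
    """Draw one P-field realization after shifting the probability matrix."""
    p = np.roll(p_field, shift, axis=1)      # shift by `shift` columns
    return norm.ppf(p, loc=local_mean, scale=local_std)

# Consecutive frames differ by a one-column shift: small, incremental changes.
frames = [realization_from_shift(s) for s in range(nx)]
```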

If given viable techniques, geoscientists and reservoir engineers will be able to design and plan more at the workstation. Consider how much more effective a reservoir-development team could be using such techniques—creating a dynamic display for a well in the thickest part of the reservoir, for example, and discovering from the animation that there exists no predictably economically viable net pay. Such techniques could bring reality to the reservoir-modeling process and ensure that planning does not take place on a single, arbitrary model that cannot duplicate reality.[21]

Summary of conditional simulation

Table 1 summarizes aspects of conditional simulation.

References

  1. Isaaks, E.H. and Srivastava, R.M. 1989. An Introduction to Applied Geostatistics. Oxford, UK: Oxford University Press.
  2. Dubrule, O. 1998. Geostatistics in Petroleum Geology, AAPG Course Note Series, AAPG, Tulsa, 38, 52.
  3. Chambers, R.L., Yarus, J.M., and Hird, K.B. 2000. Petroleum Geostatistics for the Nongeostatistician—Part 2. The Leading Edge 19 (6): 592-599. http://dx.doi.org/10.1190/1.1438664
  4. Cosentino, L. 2001. Integrated Reservoir Studies. Paris, France: Institut Français du Pétrole Publications, Editions Technip.
  5. Haldorsen, H.H. and Damsleth, E. 1990. Stochastic Modeling (includes associated papers 21255 and 21299). J Pet Technol 42 (4): 404-412. SPE-20321-PA. http://dx.doi.org/10.2118/20321-PA
  6. Tyler, K., Henriquez, A., and Svanes, T. 1994. Modeling Heterogeneities in Fluvial Domains: A Review of the Influences on Production Profiles. Stochastic Modeling and Geostatistics, 3, 77-90, ed. J.M. Yarus and R.L. Chambers. Tulsa, Oklahoma: AAPG Computer Applications in Geology, AAPG.
  7. Hatløy, A.S. 1994. Numerical Modeling Combining Deterministic and Stochastic Methods. Stochastic Modeling and Geostatistics, J.M. Yarus and R.L. Chambers (eds.), AAPG Computer Applications in Geology, AAPG, Tulsa (1994) 3, 109–120.
  8. Mantoglou, A. and Wilson, J.W. 1982. The Turning Bands Methods for Simulation of Random Fields Using Line Generation by a Spectral Method. Water Research 18 (5): 1379.
  9. Deutsch, C.V. and Journel, A.G. 1998. GSLIB: Geostatistical Software Library and User’s Guide, second edition. Oxford, UK: Oxford University Press.
  10. Deutsch, C.V. 2002. Geostatistics Reservoir Modeling. Oxford, UK: Oxford University Press.
  11. Ravenne, C. and Beucher, H. 1988. Recent Development in Description of Sedimentary Bodies in a Fluvio Deltaic Reservoir and Their 3D Conditional Simulations. Presented at the SPE Annual Technical Conference and Exhibition, Houston, Texas, 2-5 October 1988. SPE-18310-MS. http://dx.doi.org/10.2118/18310-MS
  12. Mathieu, G. et al. 1993. Reservoir Heterogeneity in Fluviatile Keuper Facies: A Subsurface and Outcrop Study. Subsurface Reservoir Characterization from Outcrop Observations, 145-160, ed. R. Eschard and B. Doligez. Paris, France: Technip Publication.
  13. Matheron, G., Beucher, H., de Fouquet, C. et al. 1987. Conditional Simulation of the Geometry of Fluvio-Deltaic Reservoirs. Presented at the SPE Annual Technical Conference and Exhibition, Dallas, Texas, 27–30 September. SPE-16753-MS. http://dx.doi.org/10.2118/16753-MS
  14. Srinivasan, S. and Caers, J. 2000. Conditioning reservoir models to dynamic data - A forward modeling perspective. Presented at the SPE Annual Technical Conference and Exhibition, Dallas, Texas, 1-4 October 2000. SPE-62941-MS. http://dx.doi.org/10.2118/62941-MS
  15. Dubrule. O. 1989. A Review of Stochastic Models for Petroleum Reservoir. Geostatistics, 493-506, ed. M. Armstrong. Amsterdam, The Netherlands: Kluwer Publishers.
  16. Wang, J. and MacDonald, A.C. 1997. Modeling Channel Architecture in a Densely Drilled Oilfield in East China. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38678-MS. http://dx.doi.org/10.2118/38678-MS.
  17. Goovaerts, P. 1999. Impact of the Simulation Algorithm, Magnitude of Ergodic Fluctuations and Number of Realizations on the Space of Uncertainty of Flow Properties. Stochastic Environmental Research and Risk Assessment 13 (3): 161.
  18. Goovaerts, P. 2006. Geostatistical Modeling of the Spaces of Local, Spatial, and Response Uncertainty for Continuous Petrophysical Properties. Stochastic Modeling and Geostatistics: Principles, Methods, and Case Studies, Vol. II, ed. T.C. Coburn, J.M. Yarus and R.L. Chambers. Tulsa, Oklahoma: AAPG Computer Applications in Geology, AAPG.
  19. Journel, A.G. and Ying, Z. 2001. The Theoretical Links Between Sequential Gaussian, Gaussian Truncated Simulation, and Probability Field Simulation. Mathematical Geology 33: 31.
  20. Wingle, W.L. and Poeter, E.P. 1993. Uncertainty Associated with Semi Variograms Used for Site Simulation. Ground Water 31: 725.
  21. Srivastava, R.M. 1995. The Visualization of Spatial Uncertainty. Stochastic Modeling and Geostatistics, 3, 339-346, ed. J.M. Yarus and R.L. Chambers. Tulsa, Oklahoma: AAPG Computer Applications in Geology, AAPG.
  22. Froidevaux, R. 1992. Probability Field Simulation, 73, ed. A. Soares. Geostatistics Troia 1992, Proceedings of the Fourth Geostatistics Congress Dordrecht, The Netherlands: Kluwer Academic Publishers.
  23. Srivastava, R.M. 1992. Reservoir Characterization With Probability Field Simulation. Presented at the SPE Annual Technical Conference and Exhibition, Washington, DC, 4–7 October. SPE-24753-MS. http://dx.doi.org/10.2118/24753-MS


See also

Geostatistics

Geostatistical reservoir modeling

Probability and uncertainty analysis

Spatial statistics

Basic elements of a reservoir characterization study
