
# Kriging and cokriging

Kriging and cokriging are geostatistical techniques used for interpolation (mapping and contouring) purposes. Both methods are generalized forms of univariate and multivariate linear regression models, for estimation at a point, over an area, or within a volume. They are linear-weighted averaging methods, similar to other interpolation methods; however, their weights depend not only on distance, but also on the direction and orientation of the neighboring data to the unsampled location.


## Kriging

Whether by hand or by computer, contouring data requires an interpolation procedure. Many algorithms are used in computer mapping, each imposing its own criteria on the result. Quite often, though, a computer-generated map is unsatisfactory because it does not look real (that is, it does not depict geology as we envision it) and so usually requires editing. The geoscientist working by hand interpolates between data points, draws connecting contours, and intuitively smooths and biases the isolines to construct a map on the basis of a geologic model.

A variety of commonly used mapping algorithms can be used for computer interpretation. The weights used during the interpolation usually are based on the distance of each control point (sample value) from the target location (grid node). Control points closer to the target receive the larger weights; however, if the data exhibit strong anisotropy, it does not necessarily hold true that the closest control point should receive the greatest weight. Rather, more distant control points along the axis of maximum correlation should have greater influence on the interpolated value. ^{[1]}

Kriging’s practical strength as an interpolation method lies in its ability to capture the anisotropy of the underlying geologic variables through the spatial covariance model, yielding maps that look more geologically plausible. From a statistical perspective, it has an additional strength under conditions of multivariate normality. The kriging algorithm has been given the acronym BLUE (best linear unbiased estimator). That it is a linear, unbiased estimator is not unique; essentially all the interpolation algorithms used for mapping share these attributes. It is the word “best” that is intriguing: kriging is considered best because, under these multivariate normality conditions, it minimizes the error variance associated with the estimate. That is to say, for the set of neighboring data used in an estimation, the set of weights calculated with kriging guarantees that no other method will generate a lower estimation error.

The “best” claim is clarified further by kriging’s connection with the condition of unbiasedness. Unbiasedness is assumed for all the interpolation algorithms and means simply that, when mathematically interpolating, we expect to overestimate as often as we underestimate. Thus, we can visualize the estimation error as a bell-shaped curve with a mean of zero. It is this assurance of a balanced distribution of errors, with a spread that is as narrow as possible, that has earned kriging the accolade “best.”

**Table 1** shows a summary description of kriging and cokriging.

### Kriging estimator

Several types of kriging methods are available, and they are distinguishable by how the mean value is determined and used during the interpolation process.^{[2]} ^{[3]} ^{[4]} ^{[5]} ^{[6]} The four most commonly used variations are:

- Simple kriging — The global mean is known (or can be supplied by the user) and is held constant over the entire area of interpolation.
- Ordinary kriging — The local mean varies and is re-estimated on the basis of nearby (or local) data values.
- Kriging with an external drift — The shape of the map is described by a secondary variable called the drift term (e.g., seismic data, gravity data, magnetic data, and Landsat data). Although this method uses two variables, only one covariance model is required and the shape comes from a related 2D or 3D attribute that guides the interpolation of the primary attribute known only at discrete locations. A typical application is time-to-depth conversion, where the primary attribute (depth at the wells) acquires its shape from the secondary attribute, the external drift (two-way-travel time known on a 2D grid).
- Indicator kriging — Estimates the probability of a discrete attribute at each grid node (e.g., lithology, productivity) and requires a binary coding of the attribute.

### General kriging system of equations

The unknown value z_{0}* is a linear combination of N values of a regionalized variable z(u_{i}):

$$z_0^* = \sum_{i=1}^{N} \lambda_i z_i \qquad (1)$$

where z_{0}* = the value at an unsampled location to be estimated from a linear combination of N values of a regionalized variable z_{i}, with the same units as for the regionalized variable; λ_{i} = the weight of the regionalized variable z_{i} at a given location, unitless; and z_{i} = the regionalized variable at a given location, with the same units as for the regionalized variable. The λ_{i} values are determined according to three criteria: the weights sum to 1.0; the estimate is unbiased; and the estimation variance is minimized.

The kriging system is a set of N+1 linear equations with N+1 unknowns. The system of equations generally is written in terms of covariance and is the result of minimizing the variance:

$$\sum_{j=1}^{N} \lambda_j C_{ij} + \mu = C_{0i}, \quad i = 1, \ldots, N; \qquad \sum_{j=1}^{N} \lambda_j = 1 \qquad (2)$$

In **Eq. 2**, C_{0i} = the covariance between a sample at a given location and the unsampled location (target), with the same units as for the regionalized variable, squared; C_{ij} = the covariance between two measured samples at given locations, where i and j are the indices of the sample pair and vary between the first and last measurements, with the same units as for the regionalized variable, squared; λ_{i} = the undetermined weight assigned to a given sample, for which i is the index of that sample and varies between the first and last measurements, unitless; and μ = a Lagrange multiplier. In matrix shorthand, the formula is written as

$$\mathbf{C}\,\boldsymbol{\lambda} = \mathbf{c} \qquad (3)$$

where C = covariance matrix constructed from measured sample pairs in a given neighborhood, with the same units as for the regionalized variable, squared; λ = the vector of undetermined weights for measured samples within a neighborhood, unitless; and c = the vector of covariances constructed from measured samples in a given neighborhood to the target location, with the same units as for the regionalized variable, squared. In addition, the kriging variance is written as

$$\sigma_k^2 = C_{00} - \sum_{i=1}^{N} \lambda_i C_{0i} - \mu \qquad (4)$$

where σ_{k}^{2} = the kriging variance, with the same units as for the regionalized variable, squared; C_{00} = the covariance at lag zero, equal to the sill of the variogram, with the same units as for the regionalized variable, squared; λ_{i} = the undetermined weight assigned to a given sample, which varies between the first and last measurements, unitless; C_{0i} = the covariance between a sample at a given location and the unsampled location (target), with the same units as for the regionalized variable, squared; and μ = a Lagrange multiplier.
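The kriging system above can be assembled and solved directly with a linear solver. The following is a minimal sketch, not production code: it assumes an isotropic exponential covariance model with illustrative sill and range values, builds the (N+1)×(N+1) ordinary-kriging system with the Lagrange multiplier, and returns the estimate, the weights, and the kriging variance.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=500.0):
    """Isotropic exponential covariance model C(h) = sill * exp(-3h/range)."""
    return sill * np.exp(-3.0 * h / rng)

def ordinary_kriging(xy, z, target, cov=exp_cov):
    """Solve the (N+1)x(N+1) ordinary-kriging system for one target location.

    Returns the estimate z0*, the weights lambda_i, and the kriging
    variance sigma_k^2.
    """
    n = len(z)
    # C_ij: covariances between all sample pairs
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[:n, n] = 1.0   # Lagrange column enforcing sum(lambda) = 1
    A[n, :n] = 1.0
    # c: covariances from each sample to the target
    b = np.zeros(n + 1)
    b[:n] = cov(np.linalg.norm(xy - target, axis=1))
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]
    z0 = lam @ z
    sigma2 = cov(0.0) - lam @ b[:n] - mu   # the variance expression above
    return z0, lam, sigma2

# Three porosity samples (illustrative values) around a target at the origin
xy = np.array([[100.0, 0.0], [0.0, 150.0], [-120.0, -80.0]])
z = np.array([0.30, 0.25, 0.20])
z0, lam, s2 = ordinary_kriging(xy, z, np.array([0.0, 0.0]))
print(z0, lam.sum(), s2)  # weights sum to 1.0 by construction
```

Note that the unbiasedness row forces the weights to sum to 1.0 regardless of the covariance model chosen; the model affects how the weight is distributed among the neighbors.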

### Kriging estimation variance

The estimation variance computed by all kriging programs provides the narrowest confidence interval about the estimate and thus produces the “best” estimate, but only under conditions of multivariate normality. If the distribution of data values departs from multivariate normality (a frequent occurrence), the kriging variance might not be precise and might represent only a relative measure of the goodness of the estimate.^{[7]} The estimation error is more reliably obtained by post-processing conditional simulations, which is discussed later.

### Search ellipse

Computer-mapping programs must be instructed in how to gather and use control points during interpolation. Most modelers who are familiar with computer mapping know that this involves designing a search ellipse, or search neighborhood. The “unique neighborhood” (or “global neighborhood”) is the simplest type: it uses all the data and has an infinite radius. A “moving neighborhood” is a search strategy that uses only a portion of the total number of control points. Typically, the modeler must specify the radius length, the number of sectors, and the number of control points per sector. Most traditional mapping programs allow the user to specify only one radius, defining a circular (isotropic) search; however, during variographic analysis, we often find that the spatial model requires an anisotropic covariance function. Therefore, the search ellipse should be designed with radii similar to the correlation scales (or their relative ratios), with its longest axis aligned with the direction of maximum correlation.^{[1]}
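The anisotropic search described above amounts to ranking control points by an elliptical distance: rotate each offset into the correlation axes and scale by the two radii. A minimal sketch follows; the azimuth convention (clockwise from north) and the radii are illustrative assumptions.

```python
import numpy as np

def anisotropic_distance(dx, dy, azimuth_deg, r_major, r_minor):
    """Scaled distance for a search ellipse: values <= 1 fall inside.

    azimuth_deg is the direction of maximum correlation, measured
    clockwise from north (a common mapping convention; an assumption here).
    """
    a = np.radians(90.0 - azimuth_deg)      # convert azimuth to math angle
    u = dx * np.cos(a) + dy * np.sin(a)     # component along the major axis
    v = -dx * np.sin(a) + dy * np.cos(a)    # component along the minor axis
    return np.sqrt((u / r_major) ** 2 + (v / r_minor) ** 2)

# A control point 800 m east of the target, with a search ellipse whose
# 1000 m major axis runs east-west (azimuth 90) and minor axis is 400 m:
d_in = anisotropic_distance(800.0, 0.0, 90.0, 1000.0, 400.0)   # 0.8 -> inside
d_out = anisotropic_distance(0.0, 800.0, 90.0, 1000.0, 400.0)  # 2.0 -> outside
print(d_in, d_out)
```

The same physical offset (800 m) is accepted along the axis of maximum correlation but rejected perpendicular to it, which is exactly the behavior an isotropic circular search cannot express.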

### Model cross validation

Cross-validation is a statistical method for testing the integrity of the covariance function and the neighborhood design. The procedure uses the kriging algorithm and allows us to compare estimated values at the measured value locations, just as one computes residuals between predicted and observed values in regression or analysis of variance.^{[2]} ^{[4]} ^{[8]} ^{[9]} The procedure is as follows:

- For each sample in the data set, compute a kriged estimate at the same location, using the covariance function and neighborhood, but ignoring that sample value during re-estimation. Thus, each sample value of the data set has a re-estimated value and a kriging variance.
- From this information, create a variety of displays. One useful display is a histogram of the standardized re-estimation error, which is the re-estimated value minus the measured value, divided by the square root of the kriging variance (the kriging standard deviation). A histogram that is symmetrical about its zero mean indicates an unbiased model, which ensures that anywhere in the mapped area, interpolated values have an equal chance of being over or under the true value.
- It also is instructive to cross-plot and compute the correlation between the standardized re-estimation errors and the re-estimated values. There should be no correlation; if there is, the results are biased. Unbiasedness cannot be ensured using traditional regression methods because of the slope term in the regression equation.

## Cokriging

### Cokriging estimator

Traditional regression methods only use data available at the target location and fail to use existing spatial correlations from secondary-data control points and the primary attribute to be estimated.^{[6]} Cokriging methods are used to take advantage of the covariance between two or more regionalized variables that are related, and are appropriate when the main attribute of interest (well data) is sparse, but related secondary information (seismic) is abundant. Geostatistical-data-integration methods yield more-reliable reservoir models because they capitalize on the strengths of both data types.

The mutual spatial behavior of regionalized variables (RVs) is known as co-regionalization. Cokriging requires the same conditions to be satisfied as kriging does, but demands more variography, modeling, and computation time. Interestingly, though, the support effect is handled in cokriging through the cross-covariance function, such that many different types and scales of data are integrated easily.^{[2]} ^{[4]} ^{[5]} ^{[6]} ^{[8]} ^{[10]}

The common cokriging methods are multivariate extensions of the kriging system of equations, and use two or more additional attributes. The four common cokriging methods are simple, ordinary, collocated, and indicator.

Simple cokriging uses a multivariate spatial model and a related secondary 2D or 3D attribute to guide the interpolation of a primary attribute known only at well locations. The mean is specified explicitly and assumed to be a global constant.

Ordinary cokriging is similar to simple cokriging in that the mean is assumed constant, but it differs in that rather than being specified globally, the mean is estimated locally within each set of neighborhood control points.

Collocated cokriging can be simple or ordinary. It uses all the primary data, as before, but also includes one or more secondary-data values at locations different from those of the well data.

Indicator cokriging can be simple or ordinary, and estimates the probability or proportion of a discrete attribute at each grid node (e.g., lithology or productivity), and requires binary coding of the primary and secondary attributes. A modified form of indicator cokriging is indicator collocated cokriging.

### General cokriging system of equations

The unknown value z_{0}* is a linear combination of N values of two or more regionalized variables. The general equation for two-variable cokriging for which input data are located only at wells is

$$z_0^* = \sum_{i=1}^{N} \lambda_i z_i + \sum_{j=1}^{N} \beta_j t_j \qquad (5)$$

where z_{0}* = the estimate at the grid node; λ_{i} = the undetermined weight assigned to the primary sample z_{i} and varies between 0 and 100%; z_{i} = the regionalized variable at a given location, with the same units as for the regionalized variable; t_{j} = the secondary regionalized variable that is collocated with the primary regionalized variable z_{i}, with the same units as for the secondary regionalized variable; and β_{j} = the undetermined weight assigned to t_{j} and varies between 0 and 100%.

The estimate is unbiased, with the estimation variance minimized, and requires the following:

- The spatial covariance model of the primary attribute (e.g., well data).
- The spatial covariance model of the secondary attribute (e.g., seismic data).
- The spatial cross-covariance model of primary and secondary attributes (e.g., well and seismic data).

Collocated cokriging’s advantage comes from its use of a secondary variable that is sampled at many locations, but is not necessarily coincident with the primary variable, as previously mentioned. One potential problem with any of the collocated cokriging methods is that the secondary data can be overabundant relative to the primary variable. This generally is the case when a seismic attribute is used as the secondary variable, with relatively sparse well data used as the primary variable. To circumvent this problem, a reduced form of collocated cokriging is used, whereby the equation retains only the covariance of the primary variable z_{i} and the seismic samples (or other secondary data) that are coincident only with the target node (variable t_{0}, below).^{[2]}

This reduced form of the collocated cokriging estimator, which retains only the covariance of the secondary data at the target node, is written as

$$z_0^* = \sum_{i=1}^{N} \lambda_i z_i + \beta_0 t_0 \qquad (6)$$

where z_{0}* = the estimate at the grid node; λ_{i} = the undetermined weight assigned to z_{i} and varies between 0 and 100%; z_{i} = the regionalized variable at a given location, with the same units as for the regionalized variable; t_{0} = the secondary regionalized variable located at the target location (grid node), with the same units as for the secondary variable; and β_{0} = the undetermined weight assigned to t_{0} and varies between 0 and 100%.

The full version of the collocated cokriging estimator, as implemented by practitioners at the Centre de Géostatistique (France),^{[11]} is written as

$$z_0^* = \sum_{i=1}^{N} \lambda_i z_i + \sum_{j=1}^{N} \beta_j t_j + \beta_0 t_0 \qquad (7)$$

where z_{0}* = the estimate at the grid node; λ_{i} = the undetermined weight assigned to z_{i} and varies between 0 and 100%; z_{i} = the regionalized variable at a given location, with the same units as for the regionalized variable; t_{j} = the secondary regionalized variable collocated with z_{i}, with the same units as for the secondary regionalized variable; β_{j} = the undetermined weight assigned to t_{j} and varies between 0 and 100%; t_{0} = the secondary variable located at the target location, with the same units as for the secondary variable; and β_{0} = the undetermined weight assigned to t_{0} and varies between 0 and 100%.

This form of the collocated cokriging estimator requires knowledge of the secondary variable t_{j} at all primary-data locations z_{i} and at all grid nodes t_{0} being estimated. Both methods:

- Require only the simple covariance model of the primary attribute z_{i}. When primary data are sparse, the covariance model often is derived from the covariance model of the densely sampled secondary attribute t_{j}.
- Use all primary data (and, for the full version, all secondary data located at the control points), according to the search criteria, plus the secondary-data attribute located at the target grid node during estimation.
- Incorporate the Markov-Bayes assumption,^{[2]} which says that if the secondary-attribute covariance model is assumed to be proportional to the primary-attribute covariance model, then the correlation coefficient and the ratio of the secondary variance to the primary variance transform a univariate covariance model into a multivariate covariance model.
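Under the Markov-Bayes assumption, the reduced collocated form can be sketched by augmenting the kriging system with one extra unknown for the collocated secondary value. Everything here is illustrative: the simple (known-mean) form is used, the primary correlogram is an assumed exponential model shared by the cross-covariance, and the means, standard deviations, and correlation coefficient are made up.

```python
import numpy as np

def rho(h, rng=500.0):
    """Primary correlogram (assumed exponential), shared by the
    cross-covariance under the Markov-Bayes assumption."""
    return np.exp(-3.0 * h / rng)

def collocated_simple_cokriging(xy, z, t0, target, mz, mt, sz, st, r):
    """Reduced collocated cokriging estimate (simple form, known means).

    r is the linear correlation between primary z and secondary t; the
    cross-covariance is taken as r*sz*st*rho(h) (Markov-Bayes model).
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    d0 = np.linalg.norm(xy - target, axis=1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = sz * sz * rho(d)                  # C_z between samples
    A[:n, n] = A[n, :n] = r * sz * st * rho(d0)   # cross-cov, sample to target
    A[n, n] = st * st                             # C_t at lag zero
    b = np.zeros(n + 1)
    b[:n] = sz * sz * rho(d0)                     # C_z, sample to target
    b[n] = r * sz * st                            # cross-cov at lag zero
    w = np.linalg.solve(A, b)
    lam, beta0 = w[:n], w[n]
    # Simple (co)kriging works on residuals from the known means
    return mz + lam @ (z - mz) + beta0 * (t0 - mt)

# Two wells plus one collocated seismic value at the target (illustrative)
xy = np.array([[200.0, 0.0], [0.0, 300.0]])
z = np.array([0.28, 0.22])
est = collocated_simple_cokriging(xy, z, t0=0.5, target=np.array([0.0, 0.0]),
                                  mz=0.25, mt=0.4, sz=0.04, st=0.2, r=0.7)
print(est)
```

The single extra row and column are what make this the reduced form: only the covariance of the secondary data at the target node enters, so no secondary covariance model away from lag zero is needed.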

The corresponding cokriging system for determining the weights is the same as that of the general kriging system. As with kriging, there is an associated estimation variance for each interpolated value, and the same precautions hold true.^{[4]}

## Static data integration

So far the discussion on cokriging has covered the geostatistical techniques available for data integration when a correlation exists between two or more attributes. Three factors are important for effective data integration ^{[12]} ^{[13]}:

- The scale factor (support effect) — Each type of measurement has a scale factor associated with it that is related to the volume on which the measurement is made. These factors are microscale [e.g., scanning electron microscope (SEM), thin section], macroscale (e.g., core, wireline, cross-well seismic), and megascale (e.g., well tests, flow meters, surface seismic). Each measures different scales of reservoir heterogeneity.
- Measurement environment — Different methods provide measurements in different environments. Core analyses typically are performed under ambient temperature and pressure, for example, whereas wireline methods provide indirect measurements at reservoir conditions.
- Measurement type — Reservoir property measurements are either direct (e.g., core analyses) or indirect (e.g., wireline, well tests).

Before attempting any type of data integration, it is important to understand the nature of the information obtained for each type of attribute measured. Reconciling the three integration factors above is not easy, and there is no procedure or set of absolute rules to follow to do so. Thus, generating a single, consistent value is not straightforward, especially when integrating static and dynamic information, such as core and well-test permeabilities.

### Methodology of data integration

Cosentino^{[12]} suggested that the best way to integrate data is stepwise, progressively integrating data from the smallest to the largest scale. An example of this approach is the following four-step process used by many investigators to integrate wireline and seismic data. ^{[14]} ^{[15]} ^{[16]} ^{[17]} ^{[18]} ^{[19]} ^{[20]} ^{[21]} ^{[22]} The basic idea is to use some attribute from the seismic volume (e.g., acoustic impedance) to guide the interpolation of a measured reservoir property (e.g., porosity).

#### Calibration

Well data are high-resolution, depth-related, local information, whereas 3D seismic-volume data are spatially dense but vertically lower-resolution, time-related information. These two data types require both vertical and areal calibration, which typically are carried out using synthetic seismograms.

#### Choosing the seismic attribute

Seismic processing and interpretation packages offer a plethora of seismic attributes of different natures: point values or values averaged over a time window, based on amplitude, time, or the complex trace. Techniques also are available to invert a 3D seismic amplitude cube into acoustic impedance, an attribute based on sonic velocity and rock density. The main objective in choosing a seismic attribute is to identify the one that works best as a predictor for the attribute of interest. Take care, though, because it is not unusual to find false correlations, ones that have no physical basis. The probability of finding a false correlation increases with the number of seismic attributes considered and is inversely proportional to the number of data control points.^{[23]}
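The risk of false correlations can be demonstrated in a few lines: screening many purely random, physically meaningless "attributes" against a property known at only a handful of wells routinely turns up a large sample correlation by chance alone. The counts and values below are illustrative.

```python
import numpy as np

gen = np.random.default_rng(42)
n_wells, n_attrs = 10, 50

poro = gen.normal(0.25, 0.03, n_wells)           # primary attribute at wells
attrs = gen.standard_normal((n_attrs, n_wells))  # unrelated random "attributes"

# Sample correlation of each random attribute with porosity
r = np.array([np.corrcoef(a, poro)[0, 1] for a in attrs])
print(np.abs(r).max())  # often large despite zero true correlation
```

With only 10 wells, individual sample correlations scatter widely around zero, so the best of 50 candidates almost always looks impressive; this is why a physical rationale for the chosen attribute matters.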

#### Prediction

In the prediction step, the areal distribution of the variable of interest is mapped by integrating the well data and the seismic attribute, using either linear or nonlinear regression models or preferably a geostatistical method such as collocated cokriging.

#### Cross-validation

Cross-validation involves systematically removing wells from the data set, one by one, and re-estimating their values on the basis of the model selected. Although it is not always performed, cross-validation does provide a means of validating the contribution of the secondary information, which improves the prediction.

This stepwise approach can be used to integrate many data types. For example, it could be applied to core and wireline data. Cores are sparsely sampled relative to wireline logs. First, and importantly, depth-register the core data and logs. Next, select a well-log measurement that correlates with the core measurement (e.g., neutron density or sonic transit time with core-measured porosity). For the best geologic representation, it is best to establish correlations on the basis of sequences and lithology. Once correlations are established, use regression or geostatistical methods to integrate the data. With the cokriging approach, core data are the primary data and the wireline measurements are the secondary information. As in the previous example, results can be cross-validated to demonstrate the contribution of the wireline data.

## Static and dynamic data integration

Regardless of the method, the high-resolution geologic model (HRGM) typically is generated on the basis of static data only; however, this often presents a problem in the reservoir-simulation phase because modifications to the petrophysical properties are required for history matching. Conditioning of stochastic reservoir models with production data has been the focus of considerable research because of the perceived potential of such data integration.^{[12]} ^{[24]} ^{[25]} ^{[26]} ^{[27]} A priori constraint and a posteriori constraint are the two main lines of this research.

### A priori and a posteriori constraint

A priori constraint is the traditional approach to integrating well tests into geostatistical models. In it, the well-test data are entered directly into the numerical processing as input information to generate a random permeability field that honors the average well-test data around the wells. In a posteriori constraint, the stochastic model is modified after its generation and is forced to honor the well-test data. This is an inverse problem, where the 3D petrophysical distribution is perturbed until it satisfies some posterior constraint.

## Practical considerations

Tables 1 and 2 summarize key aspects, benefits, and limitations of kriging and cokriging.


## References

- ↑ Chambers, R.L., Yarus, J.M., and Hird, K.B. 2000. Petroleum Geostatistics for the Nongeostatistician—Part 1. The Leading Edge 19 (5): 474. http://dx.doi.org/10.1190/1.1438630.
- ↑ Deutsch, C.V. and Journel, A.G. 1998. GSLIB: Geostatistical Software Library and User’s Guide, second edition. Oxford, UK: Oxford University Press.
- ↑ Henley, S. 1981. Nonparametric Geostatistics. Essex, UK: Elsevier Applied Science Publishers.
- ↑ Hohn, M.E. 1999. Geostatistics and Petroleum Geology, second edition. Amsterdam, The Netherlands: Kluwer Academic Publishers.
- ↑ Wackernagel, H. 1995. Multivariate Geostatistics: An Introduction with Applications. Berlin, Germany: Springer-Verlag.
- ↑ Journel, A.G. 1989. Fundamentals of Geostatistics in Five Lessons. Short Course in Geology, AGU, Washington, DC, 8, 40.
- ↑ Journel, A.G. 1986. Geostatistics: Models and Tools for Earth Sciences. Math. Geol. 18: 119.
- ↑ Isaaks, E.H. and Srivastava, R.M. 1989. An Introduction to Applied Geostatistics. Oxford, UK: Oxford University Press.
- ↑ Olea, R.A. 1994. Fundamentals of Semivariogram Estimation, Modeling, and Usage. Stochastic Modeling and Geostatistics, 3, 27-35, ed. J.M. Yarus and R.L. Chambers. Tulsa, Oklahoma: AAPG Computer Applications in Geology, AAPG.
- ↑ Doyen, P.M., Psaila, D.E., and Strandenes, S. 1994. Bayesian Sequential Indicator Simulation of Channel Sands From 3-D Seismic Data in The Oseberg Field, Norwegian North Sea. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 25-28 September 1994. SPE-28382-MS. http://dx.doi.org/10.2118/28382-MS.
- ↑ Bleines, C. et al. 2001. ISATIS. Avon Cedex, France: Geovariances.
- ↑ Cosentino, L. 2001. Integrated Reservoir Studies. Paris, France: Institut Français du Pétrole Publications, Editions Technip.
- ↑ Ahmed, U., Crary, S.F., and Coates, G.R. 1991. Permeability Estimation: The Various Sources and Their Interrelationships. J Pet Technol 43 (5): 578-587. SPE-19604-PA. http://dx.doi.org/10.2118/19604-PA.
- ↑ Doyen, P.M. 1988. Porosity from Seismic Data: A Geostatistical Approach. Geophysics 53: 1263.
- ↑ Doyen, P.M. and Guidish, T.M. 1992. Seismic Discrimination of Lithology and Porosity, a Monte Carlo Approach. Reservoir Geophysics: Investigations in Geophysics, 7, 243-250, ed. R.E. Sheriff. Tulsa, Oklahoma: SEG.
- ↑ Xu, W., Tran, T.T., Srivastava, R.M. et al. 1992. Integrating Seismic Data in Reservoir Modeling: The Collocated Cokriging Alternative. Presented at the SPE Annual Technical Conference and Exhibition, Washington, D.C., 4-7 October 1992. SPE-24742-MS. http://dx.doi.org/10.2118/24742-MS.
- ↑ Araktingi, U.G. et al. 1993. Integration of Seismic and Well Log Data in Reservoir Modeling. Reservoir Characterization III, 515-554, ed. B. Linville. Tulsa, Oklahoma: PennWell Publishing.
- ↑ Chambers, R.L., Zinger, M.A., and Kelly, M.C. 1994. Constraining Geostatistical Reservoir Descriptions with 3-D Seismic Data to Reduce Uncertainty. Stochastic Modeling and Geostatistics, 3, 143-158, ed. J.M. Yarus and R.L. Chambers. Tulsa, Oklahoma: AAPG Computer Applications in Geology, AAPG.
- ↑ Bashore, W.M. et al. 1994. Importance of a Geological Framework and Seismic Data Integration for Reservoir Modeling and Subsequent Fluid-Flow Predictions. Stochastic Modeling and Geostatistics, 3, 159-176, ed. J.M. Yarus and R.L. Chambers. Tulsa, Oklahoma: AAPG Computer Applications in Geology, AAPG.
- ↑ Wolf, D.J., Withers, K.D., and Burnaman, M.D. 1994. Integration of Well and Seismic Data Using Geostatistics. Stochastic Modeling and Geostatistics, 3, 177-200, ed. J.M. Yarus and R.L. Chambers. Tulsa, Oklahoma: AAPG Computer Applications in Geology, AAPG.
- ↑ Burns, C.S. 1997. Integrating 3D Seismic into the Reservoir Model, and Its Impact on Reservoir Management. Presented at the Latin American and Caribbean Petroleum Engineering Conference, Rio de Janeiro, Brazil, 30 August-3 September 1997. SPE-38996-MS. http://dx.doi.org/10.2118/38996-MS.
- ↑ Behrens, R.A. and Tran, T.T. 1998. Incorporating Seismic Data of Intermediate Vertical Resolution Into 3D Reservoir Models. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27-30 September 1998. SPE-49143-MS. http://dx.doi.org/10.2118/49143-MS.
- ↑ Kalkomey, C.T. 1997. Potential Risks When Using Seismic Attributes as Predictors of Reservoir Properties. The Leading Edge 16 (3): 247. http://dx.doi.org/10.1190/1.1437610.
- ↑ Alabert, F.G. 1989. Constraining Description of Randomly Heterogeneous Reservoirs to Pressure Test Data: A Monte Carlo Study. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 8-11 October 1989. SPE-19600-MS. http://dx.doi.org/10.2118/19600-MS.
- ↑ Deutsch, C.V. 1993. Conditioning Reservoirs to Well Test Information. Geostatistics Troia ‘92, 505, ed. A.O. Soares. Dordrecht, The Netherlands: Kluwer Academic Publishers.
- ↑ Deutsch, C.V. and Journel, A.G. 1994. Integrating Well Test-Derived Effective Absolute Permeabilities in Geostatistical Reservoir Modeling. Stochastic Modeling and Geostatistics, 3, 131-142, ed. J.M. Yarus and R.L. Chambers. Tulsa, Oklahoma: AAPG Computer Applications in Geology.
- ↑ Blanc, G., Guerillot, D., Rahon, D. et al. 1996. Building Geostatistical Models Constrained by Dynamic Data - A Posteriori Constraints. Presented at the European 3-D Reservoir Modelling Conference, Stavanger, Norway, 16-17 April 1996. SPE-35478-MS. http://dx.doi.org/10.2118/35478-MS.


## See also

- Probability and uncertainty analysis
- PEH:Geologically_Based,_Geostatistical_Reservoir_Modeling