Uncertainty analysis in creating production forecast

PetroWiki
Jump to navigation Jump to search
(basic)
(.)
 
Line 1: Line 1:
[http://petrowiki.org/Sandbox:Uncertainty_range_in_production_forecasting Uncertainty range in production forecasting] gives an introduction to uncertainty analysis in production forecasting, including a PRMS based definition of low, best and high production forecasts. This page topic builds on this with more details of how to approach uncertainty analysis as part of creating production forecasts.
[http://petrowiki.org/Sandbox:Uncertainty_range_in_production_forecasting Uncertainty range in production forecasting] gives an introduction to uncertainty analysis in production forecasting, including a PRMS based definition of low, best and high production forecasts. This page topic builds on this with more details of how to approach uncertainty analysis as part of creating production forecasts.


Probabilistic subsurface assessments are the norm within the exploration side of the oil and gas industry, both in majors and independents<ref>Rose, P. 2007. Measuring what we think we have found: Advantages of probabilistic over deterministic methods for estimating oil and gas reserves and resources in exploration and production. AAPG Bulletin 91 (1): 21–29. http://dx.doi.org/10.1306/08030606016.</ref>. However, in many companies, the production side is still in transition from single-valued deterministic assessments, sometimes carried out with ad-hoc sensitivity studies, to more-rigorous probabilistic assessments with an auditable trail of assumptions and a statistical underpinning. Reflecting these changes in practices and technology, recently SEC rules for reserves reporting (effective 1 January 2010) were revised, in line with PRMS, to allow for the use of both probabilistic and deterministic methods in addition to allowing reporting of reserves categories other than “proved.” This section attempts to present some of the challenges facing probabilistic assessments and present some practical considerations to carry out the assessments effectively. Add content
Probabilistic subsurface assessments are the norm within the exploration side of the oil and gas industry, both in majors and independents<ref>Rose, P. 2007. Measuring what we think we have found: Advantages of probabilistic over deterministic methods for estimating oil and gas reserves and resources in exploration and production. AAPG Bulletin 91 (1): 21–29. http://dx.doi.org/10.1306/08030606016.</ref>. However, in many companies, the production side is still in transition from single-valued deterministic assessments, sometimes carried out with ad-hoc sensitivity studies, to more-rigorous probabilistic assessments with an auditable trail of assumptions and a statistical underpinning. Reflecting these changes in practices and technology, recently SEC rules for reserves reporting (effective 1 January 2010) were revised, in line with PRMS, to allow for the use of both probabilistic and deterministic methods in addition to allowing reporting of reserves categories other than “proved.” This section attempts to present some of the challenges facing probabilistic assessments and present some practical considerations to carry out the assessments effectively.  


It should be noted that for simplicity the examples referred to in this section are about calculating OOIP rather than generating probabilistic production forecasts directly. Clearly OOIP/GOIP is the starting point of any production forecast and gives a firm basis from which to build production forecasts. However, &nbsp;the outcome of probabilistic assessments are usually a set of deterministic models tied to distributions of not only OOIP but also recovery factor, initial production rate, and/or other scalar descriptions of a reservoir. These deterministic models can then be used to generate a set of low, best, and high production forecasts.
It should be noted that for simplicity the examples referred to in this section are about calculating OOIP rather than generating probabilistic production forecasts directly. Clearly OOIP/GOIP is the starting point of any production forecast and gives a firm basis from which to build production forecasts. However, &nbsp;the outcome of probabilistic assessments are usually a set of deterministic models tied to distributions of not only OOIP but also recovery factor, initial production rate, and/or other scalar descriptions of a reservoir. These deterministic models can then be used to generate a set of low, best, and high production forecasts.
Line 77: Line 77:


<span style="color: rgb(34, 34, 34); font-family: arial, sans-serif; font-size: 9.5pt;"></span>[[File:OOIP Cumulative frequency distributions-Wolff.jpg|Figure 1 OOIP Cumulative frequency distributions]]
<span style="color: rgb(34, 34, 34); font-family: arial, sans-serif; font-size: 9.5pt;"></span>[[File:OOIP Cumulative frequency distributions-Wolff.jpg|Figure 1 OOIP Cumulative frequency distributions]]





Latest revision as of 15:19, 10 December 2019

Uncertainty range in production forecasting gives an introduction to uncertainty analysis in production forecasting, including a PRMS-based definition of low, best, and high production forecasts. This page builds on that introduction with more detail on how to approach uncertainty analysis as part of creating production forecasts.

Probabilistic subsurface assessments are the norm within the exploration side of the oil and gas industry, both in majors and independents[1]. However, in many companies, the production side is still in transition from single-valued deterministic assessments, sometimes carried out with ad-hoc sensitivity studies, to more-rigorous probabilistic assessments with an auditable trail of assumptions and a statistical underpinning. Reflecting these changes in practices and technology, the SEC rules for reserves reporting were recently revised (effective 1 January 2010), in line with PRMS, to allow the use of both probabilistic and deterministic methods and to permit reporting of reserves categories other than “proved.” This section presents some of the challenges facing probabilistic assessments and offers practical considerations for carrying them out effectively.

Note that, for simplicity, the examples in this section address calculating OOIP rather than generating probabilistic production forecasts directly. Clearly, OOIP/GOIP is the starting point of any production forecast and gives a firm basis from which to build production forecasts. However, the outcome of probabilistic assessments is usually a set of deterministic models tied to distributions of not only OOIP but also recovery factor, initial production rate, and/or other scalar descriptions of a reservoir. These deterministic models can then be used to generate a set of low, best, and high production forecasts.

Look-backs - calibrating assessments

Look-backs continue to show the difficulty of achieving a forecast within an uncertainty band along with the difficulty of establishing what that band should be. Demirmen (2007)[2] reviewed reserves estimates in various regions over time and observed that estimates are poor and that uncertainty does not necessarily decrease over time.

Otis and Schneidermann (1997)[3] describe a comprehensive exploration-prospect-evaluation system that, starting in 1989, included consistent methods of assessing risk and estimating hydrocarbon volumes, including post-drilling feedback to calibrate those assessments. Although detailed look-backs for probabilistic forecasting methodologies have been recommended for some time[4] and are beginning to take place within companies, open publications on actual fields with details still are rare, possibly because of the newness of the methodologies or because of data sensitivity.

Identifying subsurface uncertainties

A systematic process of identifying relevant subsurface uncertainties and then categorizing them can help by breaking down a complex forecast into simple uncontrollable static or dynamic components that can be assessed and calibrated individually[5]. Non-subsurface, controllable, and operational uncertainties must also be considered, but the analysis usually is kept tractable by including them later with decision analysis or additional rounds of uncertainty analysis. In fact, operational uncertainties are often more significant than uncontrollable subsurface uncertainties, as Castle (1986)[6] describes in a North Sea example: 52% of fields did not meet their peak production forecast, but an overwhelming 87% of the fields were late in achieving scheduled first production.

Grouping parameters also can reduce the dimensionality of the problem. When parameters are strongly correlated (or anti-correlated), grouping them is justifiable. In fact, splitting up (“Balkanizing”) such a group of parameters could cause them to be dropped by standard screening methods such as Pareto charts. For example, decomposing a set of relative permeability curves into constituent parameters, such as saturation endpoints, critical saturations, relative permeability endpoints, and Corey exponents, can cause each of them to become insignificant individually. Treated together, relative permeability often remains a dominant uncertainty.


Assigning ranges to uncertainties

Once uncertainties have been identified, ranges for each must be quantified, which may appear straightforward but contains subtle challenges. Breaking down individual uncertainties into components (e.g., measurement, model, or statistical error) and carefully considering portfolio and sample-bias effects can help create reasonable and justifiable ranges.

Some uncertainties, especially geological ones, are not handled easily as continuous variables. In many studies, several discrete geological models are constructed to represent the spectrum of possibilities. To integrate these models with continuous parameters and generate outcome distributions, likelihoods must be assigned to each model. Although assigning statistical meaning to a set of discrete models may be a challenge if those models are not based on any underlying statistics, such models have the advantage of being fully consistent scenarios rather than combinations of independent geological-parameter values that may not make sense together.[7]

As noted previously, validation with analog data sets and look-backs should be carried out when possible because many studies and publications have shown that people have a tendency to anchor on what they think they know and to underestimate the true uncertainties involved. Therefore, any quantitative data that can help establish and validate uncertainty ranges are highly valuable.


Assigning distributions to uncertainties

In addition to ranges, distributions must be specified for each uncertainty. There are advocates for different approaches. Naturalists strongly prefer the use of realistic distributions that often are observed in nature (e.g., log normal), while pragmatists prefer distributions that are well-behaved (e.g., bounded) and simple to specify (e.g., uniform or triangular). In most cases, specifying ranges has a stronger influence on forecasts than the specific distribution shape, which may have little effect. Statistical correlations between uncertainties also should be considered, although these too are often secondary effects.

To demonstrate that ranges are often more influential than distribution shapes, a simple Monte-Carlo exercise forecasting OOIP with five input uncertainties was conducted, with all uncertainties given the same type of distribution (normal, triangular, uniform, or truncated-normal (±3σ)) but identical P10, P50, and P90 values. Table 1 shows the uncertainties, their ranges, and the OOIP equation.

Uncertainty      P90     P50     P10
area [ac]        150     200     250
hnet [ft]        200     300     400
Phie             0.15    0.20    0.25
Sw               0.25    0.30    0.35
Boi [rb/stb]     1.05    1.10    1.15

Table 1—OOIP Monte-Carlo test uncertainty ranges

OOIP [stb] = 7758 * area * hnet * phie * (1-Sw) / Boi

Also tested were log-normal and skewed-triangular distributions with the same P10 and P90 values as the other distributions, but necessarily different modes (with the skewed-triangular mode set to the log-normal mode). Fig 1 and Fig 2 show the cumulative frequency and frequency plots of the resulting OOIP distributions.



Figure 1 OOIP Cumulative frequency distributions


Figure 2 OOIP Frequency distributions
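
The exercise can be reproduced with a short Monte-Carlo script. The sketch below is an assumed implementation (the original study's tooling is not described): it samples the Table 1 uncertainties from normal and from uniform distributions matched to the same P10/P50/P90 values and reports the resulting OOIP percentiles, which, consistent with the discussion that follows, differ only slightly.

# Minimal sketch (assumed implementation): Monte-Carlo OOIP with input
# distributions matched to the same P10/P50/P90 values from Table 1.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
Z90 = 1.281552  # standard-normal 90th-percentile z-value

# (P90, P50, P10) triplets; P90 is the low-side value, P10 the high side
quantiles = {
    "area": (150.0, 200.0, 250.0),   # acres
    "hnet": (200.0, 300.0, 400.0),   # ft
    "phie": (0.15, 0.20, 0.25),
    "sw":   (0.25, 0.30, 0.35),
    "boi":  (1.05, 1.10, 1.15),      # rb/stb
}

def sample_normal(p90, p50, p10, n):
    """Normal distribution with the requested 10th/50th/90th percentiles."""
    return rng.normal(p50, (p10 - p50) / Z90, n)

def sample_uniform(p90, p50, p10, n):
    """Uniform distribution whose 10th/90th percentiles match p90/p10."""
    half_width = (p10 - p90) / 0.8 / 2.0   # P90-P10 spans 80% of the total width
    return rng.uniform(p50 - half_width, p50 + half_width, n)

def ooip_stb(s):
    return 7758.0 * s["area"] * s["hnet"] * s["phie"] * (1.0 - s["sw"]) / s["boi"]

for label, sampler in [("normal", sample_normal), ("uniform", sample_uniform)]:
    samples = {k: sampler(*q, N) for k, q in quantiles.items()}
    p90, p50, p10 = np.percentile(ooip_stb(samples), [10, 50, 90])
    print(f"{label:8s} OOIP  P90={p90:.3e}  P50={p50:.3e}  P10={p10:.3e} stb")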


In this example, the choice of distribution has relatively little effect. Compared with the truncated-normal distribution, the maximum difference between symmetrical distributions in the P10-P90 range was around 5%, with most differences only around 1%. Counter-intuitively, the widest P10-P90 spread was actually obtained with the normal distribution, while the uniform distribution gave the narrowest spread. Although the uniform distribution has less central weight, it also requires smaller absolute limits outside the P10-P90 range; for example, the uniform distribution for area has absolute limits of 137.5 to 262.5 acres, while the triangular distribution requires end points of 109.5 to 290.5 acres to match the P90 and P10 values. The increase in the absolute range of a centrally weighted distribution compensates somewhat for that weighting, minimizing the impact of the specific distribution shape used. Therefore, when specifying P10/50/90 for symmetrical distributions, it makes little practical difference which one is chosen (at least in this example, although the result may be more general).
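
The absolute limits quoted above follow directly from requiring the 10th and 90th percentiles to equal the P90 and P10 values. The short sketch below (assuming symmetric distributions centered on the P50 of 200 acres) back-calculates the uniform and triangular end points for the area uncertainty.

# Minimal sketch (assumed: symmetric distributions centered on the P50) that
# recovers the absolute limits quoted for the area uncertainty.
import math

p90, p50, p10 = 150.0, 200.0, 250.0                        # area [ac], Table 1

# Uniform: the P90-P10 interval covers 80% of the total width
width = (p10 - p90) / 0.8
uni_lo, uni_hi = p50 - width / 2, p50 + width / 2           # 137.5, 262.5 ac

# Symmetric triangular with mode c = p50 and half-width w:
# F(p90) = (p90 - (c - w))**2 / (2*w**2) = 0.1  =>  w - (c - p90) = sqrt(0.2)*w
w = (p50 - p90) / (1.0 - math.sqrt(0.2))
tri_lo, tri_hi = p50 - w, p50 + w                           # ~109.5, ~290.5 ac

print(uni_lo, uni_hi, tri_lo, tri_hi)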

Unsurprisingly (since the central part of each input distribution is changed), the skewed distributions show more differences – the log-normal case is up to 13% different (around the P40-P30 level) from the truncated-normal distribution. However, the difference between the two skewed distributions is small, a maximum of 3% in the P10-P90 range. One could therefore conclude that the general choice of skewed versus non-skewed distributions is more influential than the specific distribution selected within each class.

Differences are greater outside the P10-P90 range, but the significance of distribution tails in practical oilfield problems is limited. It is often preferable to avoid presenting the tails of distributions to preempt the interminable nit-picking that ensues when someone who does not fully understand the process spots a few negative recovery factors or oil-in-place values out of 100,000 Monte-Carlo trials. Such values obviously make no physical sense, but when sampling distributions with non-physical proxies, a few non-physical values can result from proxy error and/or unbounded uncertainty distributions.

The criticality of distribution selection is quite different when distributions are defined by extreme values. Running a similar Monte-Carlo exercise specifying only minimum and maximum values (P0 to P100) for uniform and triangular distributions gives the results shown in Fig 3. The curves are now distinctly different, and the uniform distribution does have the widest P10-P90 spread, as one would intuitively expect. Therefore, when specifying absolute minimum and maximum values for distributions, more attention must be paid to the distribution shape than when specifying input P10/50/90s.


Figure 3 Monte-Carlo with P0-P100 uniform and triangular distributions

It is worth noting that using discrete distributions can give P10/50/90 results similar to those from equivalent continuous distributions, but the shape of the forecast distribution can be peculiar. With a single dominant uncertainty sampled at three discrete levels, the resulting forecast will have a tri-modal distribution. This can be difficult to justify unless the uncertainty involved really has only three physical states; the underlying uncertainties often are continuous, and only three levels are used for practical reasons. However, the workflows discussed below lend themselves to using such distributions, which also fit well with decision trees.
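
To illustrate the tri-modal behavior, the sketch below (with parameter values invented for clarity rather than taken from the text) samples one dominant uncertainty at three discrete levels while keeping the other uncertainties narrow; the resulting OOIP histogram shows three distinct modes.

# Minimal sketch (illustrative, assumed parameter values): a single dominant
# uncertainty sampled at three discrete levels gives a tri-modal forecast.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Dominant uncertainty sampled at only three discrete levels
area = rng.choice([150.0, 200.0, 250.0], size=N, p=[0.3, 0.4, 0.3])

# Remaining uncertainties deliberately kept narrow so the discrete one dominates
hnet = rng.normal(300.0, 5.0, N)
phie = rng.normal(0.20, 0.003, N)
sw   = rng.normal(0.30, 0.005, N)
boi  = rng.normal(1.10, 0.005, N)

ooip = 7758.0 * area * hnet * phie * (1.0 - sw) / boi
counts, edges = np.histogram(ooip, bins=80)
print(counts)   # three separated clusters of non-zero bins (three modes)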


Uncertainty-to-forecast relationships

Once uncertainties have been identified and quantified, relationships must be established between the uncertainties and the forecasts. These relationships sometimes can be established from analytical and empirical equations but also may be derived from models ranging from simple material-balance through full 3D reservoir-simulation models. When complex models are used to define relationships, it is often useful to apply Design of Experiments (DoE) methods to investigate the uncertainty space efficiently. These methods involve modeling defined combinations of uncertainties to fit simple equations that can act as efficient surrogates or proxies for the complex models. Monte-Carlo methods then can be used to investigate the distribution of forecast outcomes, taking into account correlations between uncertainties.
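
A minimal end-to-end sketch of this workflow is shown below. The OOIP equation stands in for an expensive simulation model, a two-level full factorial stands in for whatever design (PB, FPB, D-optimal) a study would actually use, and uniform sampling of the coded space is an assumption made only for illustration.

# Minimal DoE-to-proxy-to-Monte-Carlo sketch (assumed details; OOIP equation
# used as a stand-in for the reservoir model).
import itertools
import numpy as np

rng = np.random.default_rng(0)

# -1/+1 design levels set at the P90/P10 ends of the Table 1 ranges
lo = np.array([150.0, 200.0, 0.15, 0.25, 1.05])   # area, hnet, phie, sw, boi
hi = np.array([250.0, 400.0, 0.25, 0.35, 1.15])
mid, half = (lo + hi) / 2, (hi - lo) / 2

def simulator(x):
    """Stand-in for the expensive model: the OOIP equation."""
    area, hnet, phie, sw, boi = x
    return 7758.0 * area * hnet * phie * (1.0 - sw) / boi

# Two-level design in coded units (full factorial here; PB/FPB/D-optimal tables
# from standard references would be used for larger problems)
coded = np.array(list(itertools.product([-1.0, 1.0], repeat=5)))
y = np.array([simulator(mid + row * half) for row in coded])

# Fit a linear proxy  y ~ b0 + sum(b_i * x_i)  in coded units
X = np.hstack([np.ones((len(coded), 1)), coded])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Monte Carlo on the proxy
u = rng.uniform(-1.0, 1.0, size=(100_000, 5))
forecast = np.hstack([np.ones((len(u), 1)), u]) @ beta
print(np.percentile(forecast, [10, 50, 90]))   # proxy-based P90/P50/P10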

DoE methods have been used for many years in the petroleum industry. The earliest SPE reference found is a perforating-gun study by Vogel (1956); the earliest reservoir work is a wet-combustion-drive study by Sawyer et al. (1974)[8]; and early references to 3D reservoir models include Chu (1990)[9] and Damsleth et al. (1992).[10] These early papers all highlight the main advantage of DoE over traditional one-variable-at-a-time (OVAT) methods: efficiency. Damsleth et al. (1992)[10] cite a 30 to 40% advantage for D-optimal designs compared with OVAT sensitivities.

For an extensive bibliography of papers showing pros and cons of different types of DoE and application of DoE to specific reservoir problems, see Wolff (2010).[11]


Model complexity

Given that computational power has increased vastly from the 1950s and 1970s to ever-more-powerful multicore processors and cluster computing, an argument can be made that computational power should not be regarded as a significant constraint for reservoir studies. However, Williams et al. (2004)[12] observe that gains in computational power “are generally used to increase the complexity of the models rather than to reduce model run times.”

Most would agree with the concept of making things no more complex than needed, but different disciplines have different perceptions regarding that level of complexity. This problem can be made worse by corporate peer reviews, especially in larger companies, in which excessively complex models are carried forward to ensure “buy in” by all stakeholders. Highly complex models also may require complex logic to form reasonable and consistent development scenarios for each run.

Finally, the challenge of quality control (QC) of highly complex models cannot be ignored—“garbage in, garbage out” applies more strongly than ever. Launching directly into tens to hundreds of DoE runs without ensuring that a base-case model makes physical sense and runs reasonably well will often lead to many frustrating cycles of debug and rework. A single model can readily be quality controlled in detail, while manual QC of tens of models becomes increasingly difficult. With hundreds or thousands of models, automatic-QC tools become necessary to complement statistical methods by highlighting anomalies.


Figure 4 Proxy surfaces of varying complexity


Proxy equations

Fig 4 shows proxy surfaces of varying complexity that can be obtained with different designs. A Plackett-Burman (PB) design, for example, is suitable only for linear proxies. A folded Plackett-Burman (FPB) design (with double the number of runs of the PB design, formed by adding runs that reverse the plus and minus 1s in the matrix) can provide interaction terms and lumped second-order terms (all second-order coefficients equal). A D-optimal design can provide full second-order polynomials. More-sophisticated proxies can account for greater response complexities, but at the cost of additional refinement simulation runs. These more-sophisticated proxies may be of particular use in brownfield studies in which a more-quantitative proxy could be desirable, but may not add much value to greenfield studies in which the basic subsurface uncertainties remain poorly constrained.

A recognized problem with polynomial proxies is that they tend to yield normal distributions because the terms are added (a consequence of the Central Limit theorem). For many types of subsurface forecasts, the prevalence of actual skewed distributions, such as log-normal, has been documented widely. Therefore, physical proxies, especially in simple cases such as the original-oil-in-place (OOIP) equation, have some advantages in achieving more-realistic distributions. However, errors from the use of nonphysical proxies are not necessarily significant, depending on the particular problem studied. Returning to the Monte-Carlo OOIP exercise presented above, Fig 5 compares the distributions of a physical proxy (the OOIP equation) against a 243-run full-factorial design (FFD) and a 17-run FPB polynomial proxy. The FFD distribution and P10/50/90 values are very close to the OOIP equation results (within 1%) while the FPB distribution is visibly different, but the P10/50/90 values are still within 3%. In actual application, uncertainties in determining input parameter ranges dominate such minor differences.



Figure 5 Monte-Carlo with physical equation vs two experimental-design proxies

A question raised about computing polynomial proxies for relatively simple designs such as FPB is that often there are, apparently, too few equations to solve for all the coefficients of the polynomial. The explanation is that not all parameters are equally significant and that some parameters may be highly correlated or anticorrelated. Both factors reduce the dimensionality of the problem, allowing reasonable solutions to be obtained even with an apparently insufficient number of equations. Fig 6 is a Pareto chart of the OOIP example, which shows that two parameters, Sw and Boi, are relatively insignificant compared with the other parameters, at least for the parameter ranges specified.


Figure 6 Pareto chart for OOIP example
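
A Pareto-style ranking like that in Fig 6 can be approximated by computing main effects from a two-level design. The sketch below is an assumed approach (not the original workflow) applied to the Table 1 ranges; ranking the effects by magnitude shows hnet, area, and Phie dominating while Sw and Boi contribute little.

# Minimal sketch (assumed approach): main effects from a two-level design,
# ranked Pareto-style by absolute magnitude.
import itertools
import numpy as np

names = ["area", "hnet", "phie", "sw", "boi"]
lo = np.array([150.0, 200.0, 0.15, 0.25, 1.05])
hi = np.array([250.0, 400.0, 0.25, 0.35, 1.15])
mid, half = (lo + hi) / 2, (hi - lo) / 2

def ooip(x):
    area, hnet, phie, sw, boi = x
    return 7758.0 * area * hnet * phie * (1.0 - sw) / boi

coded = np.array(list(itertools.product([-1.0, 1.0], repeat=5)))
y = np.array([ooip(mid + row * half) for row in coded])

# Main effect = mean response at the +1 level minus mean response at the -1 level
effects = {
    n: y[coded[:, j] > 0].mean() - y[coded[:, j] < 0].mean()
    for j, n in enumerate(names)
}
for n, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{n:5s} {e:12.3e}")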


Proxy validation

At a minimum, a set of blind test runs, which are not used in building the proxy, should be compared with proxy predictions. A simple crossplot of proxy-predicted vs. experimental results for points used to build the proxy can confirm only that the proxy equation was adequate to match data used in the analysis. However, it does not prove that the proxy is also predictive.

Turning one last time to the OOIP example, Fig 7 is a crossplot of FPB-proxy-predicted versus analytical results. The blind test points were taken from a three-level D-Optimal table to test the two-level FPB (plus one center point) proxy. The results here are qualitatively reasonable, which may be unsurprising for something as simple and smooth (albeit non-linear) as the OOIP equation. In general, volumetrics are more reasonably predicted with simpler proxies than are dynamic results such as recoveries, rates, and breakthrough times.


Figure 7 OOIP FPB proxy and blind test crossplot
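
The idea of a blind test can be illustrated on the same OOIP example. In the sketch below (design choices are assumptions for illustration), a proxy is fitted on two-level design runs and then evaluated on three-level points that were not used in the fit; the predicted/actual pairs are what would be plotted in a crossplot such as Fig 7.

# Minimal blind-test sketch (assumed design choices) for an OOIP proxy.
import itertools
import numpy as np

rng = np.random.default_rng(3)
lo = np.array([150.0, 200.0, 0.15, 0.25, 1.05])
hi = np.array([250.0, 400.0, 0.25, 0.35, 1.15])
mid, half = (lo + hi) / 2, (hi - lo) / 2

def ooip(x):
    area, hnet, phie, sw, boi = x.T
    return 7758.0 * area * hnet * phie * (1.0 - sw) / boi

# Proxy fitted on a two-level design; blind tests drawn from three-level points
train = np.array(list(itertools.product([-1.0, 1.0], repeat=5)))
blind = rng.choice([-1.0, 0.0, 1.0], size=(10, 5))

Xt = np.hstack([np.ones((len(train), 1)), train])
beta, *_ = np.linalg.lstsq(Xt, ooip(mid + train * half), rcond=None)

predicted = np.hstack([np.ones((len(blind), 1)), blind]) @ beta
actual = ooip(mid + blind * half)
print(np.column_stack([predicted, actual]))   # pairs for the crossplot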


Moving toward an experimental design common process

Some standardization of designs would help these methods become even more accepted and widespread in companies. Another benefit of somewhat standardized processes is that management and technical reviewers can become familiar and comfortable with certain designs and will not require re-education with each project they need to approve. However, because these methodologies are still relatively new, a period of testing and exploring different techniques is still very much under way.

The literature shows the use of a wide range of experimental design methodologies. Approaches to explore uncertainty space range from use of only the simplest PB screening designs for the entire analysis, through multistage experimental designs of increasing accuracy, to bypassing proxy methods altogether in favor of space-filling designs and advanced interpolative methods. A basic methodology extracted from multiple papers listed in Wolff (2010)[11] can be stated as follows:

  1. Define subsurface uncertainties and their ranges.
  2. Perform screening analysis (e.g., two-level PB or FPB) and analyze to identify the most influential uncertainties.
  3. If necessary, perform a more detailed analysis (e.g., three-level D-optimal or central-composite).
  4. Create a proxy model (response surface) by use of linear or polynomial proxies and validate with blind tests.
  5. Perform Monte-Carlo simulations to assess uncertainty and define distributions of outcomes.
  6. Build deterministic (scenario) low/mid/high models tied to the distributions.
  7. Use deterministic models to assess development alternatives.

However, variations and subtleties abound. Most studies split the analysis of the static and dynamic parameters into two stages with at least two separate experimental designs. The first stage seeks to create a number of discrete geological or static models (3, 5, 9, 27, or more are found in the literature) representing a broad range of hydrocarbons in place and connectivity (often determined by rapid analyses such as streamline simulation). The second stage then takes these static models and adds the dynamic parameters in a second experimental design. This approach is particularly advantageous if the project team prefers higher-level designs such as D-optimal, because it reduces the number of uncertainties in each stage. However, it cannot account for the full range of individual interactions between all static and dynamic parameters, because many of the static parameters are grouped and fixed into discrete models before the final dynamic simulations are run. This limitation becomes less significant as more discrete geological models are built that reflect more of the major uncertainty combinations.

Steps 2 and 3 in the base methodology sometimes coincide with the static/dynamic parameter split. In many cases, however, parameter screening is performed as an additional DoE step after having already determined a set of discrete geological models. This culling to the most influential uncertainties again makes running a higher-level design more feasible, especially with full-field full-physics simulation models. The risk is that some of the parameters screened from the analysis as insignificant in the development scenario that was used may become significant under other scenarios. For example, if the base scenario was a peripheral waterflood, parameters related to aquifer size and strength may drop out. If a no-injector scenario is later examined, the P10/50/90 deterministic models may not include any aquifer variation. Ideally, each scenario would have its own screening DoE performed to retain all relevant influential uncertainties.

An alternative is running a single-stage DoE including all static and dynamic parameters. This method can lead to a large number of parameters. Analysis is made more tractable by use of intermediate-accuracy designs such as FPB. Such compromise designs do require careful blind testing to ensure accuracy although proxies with mediocre blind-test results often can yield very similar statistics (P10/50/90 values) after Monte Carlo simulation when compared with higher-level designs. As a general observation, the quality of proxy required for quantitative predictive use such as optimization or history matching usually is higher than that required only for generating a distribution through Monte Carlo methods.

Determining which development options (i.e., unconstrained or realistic developments, including controllable variables) to choose for building the proxy equations and running Monte Carlo simulations also has challenges. One approach is to use unconstrained scenarios that effectively attempt to isolate subsurface uncertainties from the effects of these choices (Williams 2006)[5]. Another approach is to use a realistic base-case development scenario if such a scenario already exists or make an initial pass through the process to establish one. Studies that use DoE for optimization often include key controllable variables in the proxy equation despite knowing that this may present difficulties such as more-irregular proxy surfaces requiring higher-level designs.

Integrated models consider all uncertainties together (including surface and subsurface), which eliminates picking a development option (see Integrated asset modelling in production forecasting for more details). These models may be vital for the problem being analyzed; however, they present additional difficulties. Either computational costs will increase or compromises to subsurface physics must be made, such as eliminating reservoir simulation in favor of simplified dimensionless rate-vs.-cumulative-production tables or proxy equations. That reopens the questions: “What development options were used to build those proxies?” and “How valid are those options in other scenarios?”


Deterministic models

Short of using integrated models, there remains the challenge of applying and optimizing different development scenarios across a probabilistic range of forecasts. Normal practice is to select a limited number of deterministic models that capture a range of outcomes, often three (e.g., P10/50/90) but sometimes more if testing particular uncertainty combinations is desired. It is also common to match the probability levels of two outcomes at once (e.g., pick a P90 model that has both P90 OOIP and P90 oil recovery). Some studies attempt to match P90 levels of other outcomes at the same time, such as discounted oil recovery (which ties better to simplified economics because it puts a time value on production), recovery factor, or initial production rate. The more outcome matches that are attempted, the more difficult it is to find a suitable model.
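
One simple way to search for such a model is sketched below: rank Monte Carlo samples by how close they sit to the target percentile of each outcome simultaneously and take the closest sample as a candidate low case. The input distributions are assumed (normal, matched to Table 1), and the recovery-factor range is hypothetical because the text's example covers only OOIP.

# Minimal sketch (assumed distributions; hypothetical recovery factor): find a
# sample that is simultaneously near the P90 of OOIP and the P90 of recovery.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
Z90 = 1.281552

def normal_from_p(p90, p50, p10, n):
    return rng.normal(p50, (p10 - p50) / Z90, n)

area = normal_from_p(150.0, 200.0, 250.0, N)
hnet = normal_from_p(200.0, 300.0, 400.0, N)
phie = normal_from_p(0.15, 0.20, 0.25, N)
sw   = normal_from_p(0.25, 0.30, 0.35, N)
boi  = normal_from_p(1.05, 1.10, 1.15, N)
rf   = normal_from_p(0.25, 0.35, 0.45, N)        # hypothetical recovery factor

ooip = 7758.0 * area * hnet * phie * (1.0 - sw) / boi
recovery = ooip * rf

# Percentile rank of each sample for both outcomes (P90 = 10th percentile)
ooip_rank = ooip.argsort().argsort() / N
rec_rank = recovery.argsort().argsort() / N
i = np.argmin((ooip_rank - 0.10) ** 2 + (rec_rank - 0.10) ** 2)
print("candidate low-case inputs:",
      area[i], hnet[i], phie[i], sw[i], boi[i], rf[i])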

Which subsurface uncertainties are selected to create a deterministic model, and how much each is varied, is a subjective exercise because there are an infinite number of combinations. Williams (2006)[5] gives guidelines for building such models, including trying to ensure a logical progression of key uncertainties from low to high models.

If a proxy is quantitatively sound, it can be used to test particular combinations of uncertainties that differ from those of the DoE before building and running time-consuming simulation models. The proxy also can be used to estimate response behavior for uncertainty levels between the two or three levels (−1/0/+1) typically defined in the DoE. This can be useful for tuning a particular combination to achieve a desired response, and it allows moderate combinations of uncertainties. Such moderate combinations, rather than extremes used in many designs, will be perceived as more realistic. This choice also will solve the problem of not being able to set all key variables to −1 or +1 levels and follow a logical progression of values to achieve P90 and P10 outcomes. However, interpolation of uncertainties can sometimes be:

  • Challenging (some uncertainties, such as permeability, may not vary linearly compared with others, such as porosity)
  • Challenging and time-consuming (e.g., interpolating discrete geological models)
  • Impossible [uncertainties with only a discrete number of physical states such as many decision variables (e.g., 1.5 wells is not possible)]

Finally, selecting the deterministic models to use is usually a whole-team activity because each discipline may have its own ideas about which uncertainties need to be tested and which combinations are realistic. This selection process achieves buy-in by the entire team before heading into technical and management reviews.

Probabilistic brownfield forecasting has the additional challenge of needing to match dynamic performance data. Although forecasts should become more tightly constrained with actual field data, data quality and the murky issue of what constitutes an acceptable history match must be considered. History-match data can be incorporated into probabilistic forecasts through several methods. The traditional and simplest method is to tighten the individual uncertainty ranges until nearly all outcomes are reasonably history matched. This approach is efficient and straightforward but may eliminate some more-extreme combinations of parameters from consideration.

Filter-proxy methods that use quality-of-history-match indicators (Landa and Güyagüler 2003)[13] will accept these more extreme uncertainty combinations. The filter-proxy method also has the virtue of transparency: explanation and justification of the distribution of matched models is straightforward, as long as the proxies (especially those for the quality of the history match) are sufficiently accurate. More-complex history-matching approaches, such as genetic algorithms, evolutionary strategies, and the ensemble Kalman filter, are a very active area of research and commercial activity, but going into detail on these methods is beyond the scope of this section.
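
A toy version of the filter-proxy idea is sketched below: Monte Carlo samples are screened by a history-match-error proxy, and only acceptable matches are kept when computing the forecast percentiles. The mismatch function here is hypothetical; in an actual study it would be a proxy fitted to quality-of-history-match indicators from simulation runs.

# Minimal filter-proxy sketch (hypothetical mismatch proxy; assumed inputs).
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
Z90 = 1.281552

def normal_from_p(p90, p50, p10, n):
    return rng.normal(p50, (p10 - p50) / Z90, n)

area = normal_from_p(150.0, 200.0, 250.0, N)
hnet = normal_from_p(200.0, 300.0, 400.0, N)
phie = normal_from_p(0.15, 0.20, 0.25, N)
sw   = normal_from_p(0.25, 0.30, 0.35, N)
boi  = normal_from_p(1.05, 1.10, 1.15, N)
ooip = 7758.0 * area * hnet * phie * (1.0 - sw) / boi

# Hypothetical history-match-error proxy: suppose observed performance mainly
# constrains the product of net thickness and porosity (a flow-capacity surrogate)
mismatch = ((hnet * phie - 63.0) / 6.0) ** 2
keep = mismatch < 1.0                            # acceptable-match filter

for label, d in [("prior", ooip), ("matched", ooip[keep])]:
    p90, p50, p10 = np.percentile(d, [10, 50, 90])
    print(f"{label:8s} n={d.size:6d}  P90={p90:.3e}  P50={p50:.3e}  P10={p10:.3e}")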


What do we really know?

In realistic subsurface-forecasting situations, enough uncertainty exists about the basic ranges of parameters that absolute numerical errors less than 5 to 10% usually are considered relatively minor, although it is difficult to give a single value that applies for all situations. For example, when tuning discrete models to P10/50/90 values, a typical practice is to stop tuning when the result is within 5% of the desired result. Spending a lot of time to obtain a more precise result is largely a wasted effort, as look-backs have shown consistently.

Brownfield forecasts are believed to be more accurate than greenfield forecasts that lack calibration, but look-backs also suggest that it may be misleading to think the result is the correct one just because a great history match was obtained. Carrying forward a reasonable and defensible set of working models that span a range of outcomes makes much more sense than hoping to identify a single “true” answer. As George Box (an eminent statistician) once said: “All models are wrong, but some are useful.”

In all these studies, there is a continuing series of tradeoffs to be made between the effort applied and its effect on the outcome. Many studies have carried simple screening designs all the way through to detailed forecasts, with well-accepted results based on very few simulation runs. These studies tend to examine the input uncertainty distributions in great depth, often carefully considering partial correlations between the uncertainties. Although the quality of the proxies used in these studies may not be adequate for quantitative predictive use, it still may be adequate for generating reasonable statistics.

Other studies use complex designs to obtain highly accurate proxies that can be used quantitatively for optimization and history matching. However, many of these studies have used standardized uncertainty distributions (often discrete) with less consideration of correlations and dependencies. Higher-speed computers and automated tools are making such workflows less time-consuming, so accurate proxies and a thorough consideration of the basic uncertainties should both be possible. Whichever emphasis is chosen, the models used should be sufficiently complex to capture the reservoir physics that significantly influences the outcome, yet simple enough that time and energy are not wasted refining something that either has little influence or remains fundamentally uncertain.

In the end, probabilistic forecasts can provide answers with names like P10/50/90 that have specific statistical meaning. However, it is a meaning that must consider the assumptions made about the statistics of the basic uncertainties, most of which lack a rigorous statistical underpinning. The advantage of a rigorous process to combine these uncertainties through DoE, proxies, Monte Carlo methods, scenario modeling, and other techniques is that the process is clean and auditable, not that the probability levels are necessarily quantitatively correct. However, they are as correct as the selection and description of the basic uncertainties.

Having broken a complex forecast into simple assumptions, it should become part of a standard process to refine those assumptions as more data become available. Ultimately, like the example from exploration mentioned at the beginning, we hope to calibrate ourselves through detailed look-backs for continuous improvement of our forecast quality.


References

  1. Rose, P. 2007. Measuring what we think we have found: Advantages of probabilistic over deterministic methods for estimating oil and gas reserves and resources in exploration and production. AAPG Bulletin 91 (1): 21–29. http://dx.doi.org/10.1306/08030606016.
  2. Demirmen, F. 2007. Reserves Estimation: The Challenge for the Industry. Society of Petroleum Engineers. http://dx.doi.org/10.2118/103434-JPT
  3. Otis, R.M. and Schneidermann, N. 1997. A process for evaluating exploration prospects. AAPG Bulletin 81 (7): 1087–1109. http://archives.datapages.com/data/bulletns/1997/07jul/1087/1087.htm
  4. Murtha, J. 1997. Monte Carlo Simulation: Its Status and Future. Distinguished Author Series, J Pet Technol 49 (4): 361–370. SPE37932-MS. http://dx.doi.org/10.2118/37932-MS.
  5. Williams, M.A. 2006. Assessing Dynamic Reservoir Uncertainty: Integrating Experimental Design with Field Development Planning. SPE Distinguished Lecturer Series presentation given for Gulf Coast Section SPE, Houston, 23 March.
  6. Castle, G. R. 1986. North Sea Score Card. Society of Petroleum Engineers. http://dx.doi.org/10.2118/15358-MS
  7. Bentley, M.R. and Woodhead, T.J. 1998. Uncertainty Handling Through Scenario-Based Reservoir Modeling. Paper SPE 39717 presented at the SPE Asia Pacific Conference on Integrated Modeling for Asset Management, Kuala Lumpur, 23–24 March. http://dx.doi.org/10.2118/39717-MS.
  8. Sawyer, D.N., Cobb, W.M., Stalkup, F.I., and Braun, P.H. 1974. Factorial Design Analysis of Wet-Combustion Drive. SPE J. 14 (1): 25–34. SPE-4140-PA. http://dx.doi.org/10.2118/4140-PA.
  9. Chu, C. 1990. Prediction of Steamflood Performance in Heavy Oil Reservoirs Using Correlations Developed by Factorial Design Method. Paper SPE 20020 presented at the SPE California Regional Meeting, Ventura, California, USA, 4–6 April. http://dx.doi.org/10.2118/20020-MS.
  10. Damsleth, E., Hage, A., and Volden, R. 1992. Maximum Information at Minimum Cost: A North Sea Field Development Study With an Experimental Design. J Pet Technol 44 (12): 1350–1356. SPE-23139-PA. http://dx.doi.org/10.2118/23139-PA.
  11. Wolff, M. 2010. Probabilistic Subsurface Forecasting. SPE-132957-MS. https://www.onepetro.org/general/SPE-132957-MS
  12. Williams, G.J.J., Mansfield, M., MacDonald, D.G., and Bush, M.D. 2004. Top-Down Reservoir Modeling. Paper SPE 89974 presented at the SPE Annual Technical Conference and Exhibition, Houston, 26–29 September. http://dx.doi.org/10.2118/89974-MS.
  13. Landa, J.L. and Güyagüler, B. 2003. A Methodology for History Matching and the Assessment of Uncertainties Associated With Flow Prediction. Paper SPE 84465 presented at the SPE Annual Technical Conference and Exhibition, Denver, 5–8 October. http://dx.doi.org/10.2118/84465-MS.

Noteworthy papers in OnePetro

Vogel, L.C. 1956. A Method for Analyzing Multiple Factor Experiments—Its Application to a Study of Gun Perforating Methods. Paper SPE 727-G presented at the Fall Meeting of the Petroleum Branch of AIME, Los Angeles, 14–17 October. http://dx.doi.org/10.2118/727-G.

Noteworthy books

Society of Petroleum Engineers (U.S.). 2011. Production Forecasting. Richardson, Texas: Society of Petroleum Engineers.

External links

Production forecasts and reserves estimates in unconventional resources. Society of Petroleum Engineers. http://www.spe.org/training/courses/FPE.php

Production Forecasts and Reserves Estimates in Unconventional Resources. Society of Petroleum Engineers. http://www.spe.org/training/courses/FPE1.php


Page champions

Martin Wolff - Oxy

See also

Production forecasting glossary

Aggregation of forecasts

Challenging the current barriers to forecast improvement

Commercial and economic assumptions in production forecasting

Controllable verses non controllable forecast factors

Discounting and risking in production forecasting

Documentation and reporting in production forecasting

Empirical methods in production forecasting

Establishing input for production forecasting

Integrated asset modelling in production forecasting

Long term verses short term production forecast

Look backs and forecast verification

Material balance models in production forecasting

Probabilistic verses deterministic in production forecasting

Production forecasting activity scheduling

Production forecasting analog methods

Production forecasting building blocks

Production forecasting decline curve analysis

Production forecasting expectations

Production forecasting flowchart

Production forecasting frequently asked questions and examples

Production forecasting in the financial markets

Production forecasting principles and definition

Production forecasting purpose

Production forecasting system constraints

Quality assurance in forecast

Reservoir simulation models in production forecasting

Types of decline analysis in production forecasting

Uncertainty analysis in creating production forecast

Uncertainty range in production forecasting

Using multiple methodologies in production forecasting
