

PEH:Risk and Decision Analysis

PetroWiki
Revision as of 16:50, 26 April 2017 by Denise Watts (Denisewatts) (talk | contribs)

Publication Information


Petroleum Engineering Handbook

Larry W. Lake, Editor-in-Chief

Volume VI – Emerging and Peripheral Technologies

H.R. Warner Jr., Editor

Chapter 10 – Risk and Decision Analysis

By James A. Murtha, SPE, Consultant, Susan K. Peterson, SPE, Consultant, and Wilton T. Adams, Consultant

Pgs. 453-559

ISBN 978-1-55563-122-2

Introduction

The oil and gas industry invests money and other resources in projects with highly uncertain outcomes. We drill complex wells and build gas plants, refineries, platforms, and pipelines where costly problems can occur and where associated revenues might be disappointing. We may lose our investment; we may make a handsome profit. We are in a risky business. Assessing the outcomes, assigning probabilities of occurrence and associated values, is how we analyze and prepare to manage risk.

An interest in quantifying risk and formalizing complex decisions requires a review of the methods available. While what is presented here is not exhaustive, it serves as a starting point for the engineer or geoscientist interested in risk analysis.

Risk and decision analysis software is as diverse as the analysis methods themselves. There are programs to do Monte Carlo simulation and decision tree analysis. Analytic models to do economics can be linked to both Monte Carlo simulation and decision trees. Closely related are optimization, sensitivity analysis, and influence diagrams. Extending further, we encounter forecasting, expert systems, and fuzzy logic. Within geoscientists' purview are mapping packages and geostatistics software, both of which have the potential to offer strong support to the analysis of uncertainty.

Our focus is on the two primary uncertainty methods, Monte Carlo simulation and decision trees, along with a review of the probability and statistics fundamentals needed to carry out an analysis and present results.


Overview


Historical Perspective

Uncertainty analysis evolved during the latter half of the 20th century. Its underpinnings in statistics and probability were in place by 1900. Problem solving, especially in industrial engineering and operations research, was introduced in midcentury, following more theoretical modeling in physics, chemistry, and mathematics in the early 1900s. The computer revolution, and in particular the availability of desktop computers and spreadsheet programs in the 1980s and 1990s, supplied the final ingredient.

Of course, there had to be motivation and hard problems to solve. Oil/gas companies became more technical, and competition for funds demanded analysis of profitability. Numerical simulation methods such as reservoir and geostatistical models became established tools, making it easier to argue for Monte Carlo and decision tree tools. Sec. 10.2 presents a more complete discussion of the historical perspective of risk analysis.

Language of Risk Analysis and Decision Making

Any description of Monte Carlo simulation and decision trees must devote some time to the underpinnings of statistics and probability. Undergraduate engineering programs sometimes include one course in statistics, and graduate programs often require one. Unfortunately, what engineers take away from those classes does not always prepare them to deal with uncertainty analysis. For whatever reason, engineers do not gain a level of comfort with the language, nor do they see immediate use for it in their jobs.

Sec. 10.3 introduces the concepts of central tendency (mean, mode, and median), dispersion (standard deviation, ranges, and confidence intervals), and skewness, as well as the graphical tools (histograms, density functions, and cumulative distributions) necessary to communicate ideas of uncertainty about a single variable. Correlation and regression, especially the former, serve to describe the relationship between two parameters. We use Excel to illustrate these descriptive statistics.

This section clarifies what it means to fit historical data. The premise is that we usually have a small sample taken from a huge population, which we wish to describe. The process begins by constructing a histogram from the data and then seeking a density function that resembles the histogram. This statistical tool contrasts sharply with the well-known linear regression, in spite of the fact that their metrics to judge the goodness of fit appear similar.

Three common distribution types—normal, log-normal, and binomial—are discussed at length to assist users in choosing an appropriate type when building a model. The central limit theorem establishes guidelines about sums and products of distributions. A cousin of statistics, probability theory, paves the way to introduce Bayes' theorem, which is invoked in prospect evaluation to ensure consistent logic for revising probabilities.

The Tools of the Trade

Sec. 10.4 is the heart of this chapter. Monte Carlo simulation and decision trees are defined and illustrated, compared and contrasted. Some problems yield to one or the other of these tools. Occasionally, both methods can serve a useful purpose. Decision trees are visual. Their impact diminishes as the model becomes larger and more complex. Decision trees rely on expected value, but decision makers do not always do the same, which brings about the notion of utility functions. Decision trees have their unique form of sensitivity analysis, limited to tweaking one or two variables at a time. Solutions to decision trees consist of a recommended path or choice of action and an associated expected value.

Monte Carlo models do not result in a recommended course of action. Rather, they make estimates, providing ranges rather than the single values of deterministic models. Their scope is broad, ranging from simple estimates of oil and/or gas reserves with volumetric formulas to full-scale field development. These models and the subsequent analysis and presentation show the wide range of possible outcomes and the probability of each.

Typical Applications of Technologies

Monte Carlo simulation models include capital costs, reserve estimates, production forecasts, and cash flow. One application of each type is discussed in enough detail in Sec. 10.5 that one can build the model on one's own computer. The decision-tree model presented in detail represents a "value of information" problem.

Engineering and Geoscientific Issues

Among the issues raised by practitioners of risk analysis are "Why should we be doing this?" and "Now that we are doing it, are we doing it right?" Both of these questions are addressed by identifying pitfalls of deterministic models (to see why we should migrate toward probabilistic methods) and pitfalls of probabilistic models (to see how we might go astray here). These two topics set the tone for Sec. 10.6.

New practitioners and managers to whom risk-analysis results are presented share a set of concerns including data availability and usefulness, appropriate level of detail, the impact of correlation, and the impact of distribution type. The last two represent general sensitivity analysis. That is, we should always be curious in the sense of "What if we change this aspect of the model?"

Other matters discussed in Sec. 10.6 include modeling rare events, software availability, and sensible corporate policies. We end the section with a brief summary of the ongoing efforts to establish reserve definitions.

Design of Uncertainty Models

A proper start in risk analysis requires investing time in the design of a model. Sec. 10.7 steps through the principal components of a Monte Carlo model: explicit equations and assumptions, a list of key input distributions, sensible selection of outputs (not too many, not too few), using correlation among inputs, early screening of key variables through sensitivity analysis, and laying the groundwork for an effective presentation.

Estimated Future of Technology for the Next Decade

Some trends are easy to project: faster and bigger-capacity computers, more and better applications of the basic tools, continued efforts to incorporate uncertainty-analysis techniques in organizations, and a growing literature to clarify and validate key ideas and to give voice to controversial topics. Other aspects of future development are less predictable. We will witness more competition and sophistication in commercial software. There will be efforts to synthesize and combine tools, especially linking methods like reservoir simulation and geostatistics to uncertainty methods. The most recent entries—real options and portfolio optimization—will undoubtedly make headway, but in what form we do not yet know.

There may be more demands for accountability: can we justify the cost and time of implementing uncertainty methods? On the other hand, this question may be resolved the way its counterpart about the availability of desktop computers was handled: everyone is doing it; our employees expect it; it is difficult to quantify the benefits, but in the end, we must do it. The oil/gas industry has a habit of following the leaders. The momentum has already picked up for risk and decision analysis. It is likely to be more widely used in the future.

Historical Perspective


Risk analysis and decision-making theory and techniques developed during the second half of the 1900s from roots in statistics, operations research, and engineering models, and then matured during the 1975 to 2000 period by expanding from early applications that focused predominantly on reserve estimation.

The material in the sections that follow this historical perspective illustrates the breadth of applications of the subject, ultimately leading to high-level management decisions about new investment opportunities and portfolio optimization.

Origins

Risk analysis did not simply spring forth in full bloom in the mid-20th century. Among its progenitors were the 17th- and 18th-century origins of probability theory in the context of games of chance; the probability and statistics formalism of the late 19th century; the problem-solving and modeling interests that led to operations research, industrial engineering, and general applied mathematics; and the more technical side of business and economics. Although some notable contributions to probability and statistics appeared much earlier (Cardano, Galileo, Gauss, Fermat, the Bernoullis, De Moivre, Bayes), not until the end of the 19th century did statistics become formalized, with pioneers like Galton (percentiles, eugenics), Pearson (chi-square test, standard deviation, skewness, correlation), and Spearman (rank correlation, applications in social sciences). The Royal Statistical Society was founded in 1834, the American Statistical Association in 1839, Statistics Sweden in 1858, and La Société de Statistique de Paris (SSP) in 1860.

During the early and mid-19th century, statistics focused on describing populations. Statistics was a mature science by the early 20th century, though the field has advanced mightily since then. Gosset introduced the t-distribution in 1908. R.A. Fisher invented experimental design, selected 5% as the standard "low level of significance," introduced terms such as "parameter" and "statistic" to the literature, solved problems in distribution theory that were blocking further progress, and invented formal statistical methods for analyzing experimental data. More recent contributions have come from John Tukey[1] (stem and leaf diagram, the terms "bit" and "software") and Edward Tufte[2] (visual presentation of statistics and data).

Deterministic, Analytical, and Monte Carlo Models

The roots of Monte Carlo simulation were in theoretical statistics, but its applicability to a spectrum of practical problems accounts for its popularity. The name Monte Carlo, as applied to uncertainty analysis, was coined by von Neumann, Metropolis, and Ulam at Los Alamos Natl. Laboratory (U.S.) in the late 1940s. Hertz published his classic article[3] in 1964. A couple of years later, Paul Newendorp began teaching classes on "petroleum exploration economics and risk analysis," out of which evolved the first edition of his text[4] in 1975, the same year as McCray[5] and two years before Megill[6] wrote their books on the subject. Ten years later, commercial software was available to do Monte Carlo simulation.

To appreciate a Monte Carlo model, we must first discuss deterministic and analytical models. It now may seem natural to recognize the uncertainty implicit in so many of the variables we estimate, but the early models from engineering, physics, and mathematics were deterministic: all inputs—the so-called independent variables—and hence the outputs, or dependent variable(s), were fixed values. There was no uncertainty. Thus, any Excel worksheet with at least one cell containing a formula that references other cells in order to calculate a result is a deterministic model. The operative word was "calculate," not "estimate." We calculated the velocity of a falling object 5 seconds after it was propelled upward with (initial) velocity of 100 ft/sec at 46° from an initial position of 500 ft above the ground, ignoring air resistance (113 ft/sec at 322°, 347 ft downrange and 458 ft high). We calculated the time for light to travel from the sun to the Earth (8 minutes 19 seconds at the equinoxes). We used calculus to calculate the optimal order quantity that would minimize total cost—ordering plus storage plus stockout—for inventory models. We found the regression line that minimized the sum of squared residuals for a crossplot. Found elsewhere in this Handbook are numerous examples of deterministic models used in the petroleum industry.
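The falling-object calculation above is a typical deterministic model: fixed inputs in, one fixed answer out. A minimal sketch in Python, assuming the standard g = 32.174 ft/sec² and ignoring air resistance (the reported direction depends on the angle convention, so the sketch returns only speed, downrange distance, and height):

```python
import math

def projectile_state(v0, angle_deg, y0, t, g=32.174):
    """Deterministic model: speed and position of a projectile t seconds
    after launch, ignoring air resistance. Units are ft and seconds."""
    theta = math.radians(angle_deg)
    vx = v0 * math.cos(theta)          # horizontal velocity (constant)
    vy = v0 * math.sin(theta) - g * t  # vertical velocity at time t
    x = vx * t                         # downrange distance
    y = y0 + v0 * math.sin(theta) * t - 0.5 * g * t ** 2  # height
    return math.hypot(vx, vy), x, y

speed, x, y = projectile_state(100, 46, 500, 5)
# speed is about 113 ft/sec, x about 347 ft, y about 458 ft
```

Nothing in this model acknowledges uncertainty in, say, the initial velocity; every run returns the same numbers.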

Introducing uncertainty amounts to replacing one or more input values with a range of possible values, or more properly, a distribution. This leads us to two classes of models, the Monte Carlo models, which are a central topic in this chapter, and another class called analytical models, which we discuss briefly here.

The analytical model can be thought of as lying between deterministic models and numerical simulation. In an analytical model, the inputs might be represented as probability distributions, and the outputs are also probability distributions. But, unlike a Monte Carlo simulation, we find the output by a formula. For instance, one can show that if we add two normal distributions having means 10 and 15 and standard deviations 5 and 4, respectively, and if these two inputs are independent, then the sum is a normal distribution with a mean of 25 and a standard deviation of √41. In general, for independent distributions, the sum of the means is the mean of the sum, and the sum of the variances is the variance of the sum. Things get complicated fast as our models get more complex algebraically, as we include dependence relationships and more exotic distribution types. Nonetheless, some work has been done combining probability distributions with formulas. [7]
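The normal-sum result can be checked numerically. The sketch below samples the two independent normals many times and confirms that the sum behaves like a normal with mean 25 and standard deviation √41 ≈ 6.4 (the sample size and seed are arbitrary choices):

```python
import math
import random

random.seed(1)
n = 100_000
# Sum of samples from independent N(10, 5) and N(15, 4) distributions:
sums = [random.gauss(10, 5) + random.gauss(15, 4) for _ in range(n)]

mean = sum(sums) / n
std = math.sqrt(sum((s - mean) ** 2 for s in sums) / (n - 1))
# mean is close to 25; std is close to sqrt(41) = 6.403
```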

Decision trees had their roots in business schools. They lie somewhere between deterministic and probabilistic models. They incorporate uncertainty in both estimates of the chance that something will happen and a range (more properly a list) of consequences. Thus, they are probabilistic. The solution, however, is a single number and a unique path to follow. Moreover, the sensitivity analysis for decision trees, which adds credibility to the model, is often ignored in papers and presentations and is quite limited in its scope compared to Monte Carlo simulation. See Sec. 10.4 for a detailed comparison of these two techniques.

Early Emphasis on Reserves/Later Cost and Value

Throughout the latter quarter of the 20th century, the oil/gas industry gradually adopted methods of uncertainty analysis, specifically decision trees and Monte Carlo simulation. A good indication of this change is the fact that the 60-page index of the 1,727-page, 1989 printing of the Petroleum Engineering Handbook[8] contained only one reference to "risk [factor]" in an article about property evaluation.

Much of the early Monte Carlo simulation and decision tree work in the oil/gas industry focused on estimating reserves and resources. Industry courses sponsored by the American Association of Petroleum Geologists (AAPG) and Society of Petroleum Engineers (SPE) often emphasized exploration. Oddly, cost models and production forecasting were often given short shrift or treated trivially. By the early 1990s, however, while Wall Street was hyping hedges and both companies and individuals were wondering about optimizing their portfolios, several companies began marketing probabilistic cash flow models for the petroleum industry.

In the mid- to late 1990s, people began to build probabilistic models for prices of oil/gas rather than simply assume three simplistic deterministic forecasts (base, optimistic, and pessimistic). The half dozen or so competing cash flow models in the petroleum industry began including some form of uncertainty analysis as optional features in their software.

During the late 1990s, SPE began an intensive dialog about probabilistic reserves definitions. SPE's most popular workshop on this topic was convened in several cities over a two-year period, often drawing hundreds of attendees. Technical interest groups (TIGs) engaged in lengthy discussions about terminology. A full discussion of reserves models, both probabilistic and deterministic, may be found in this Handbook in the chapter on reserves.

Finally, by 2000, pioneers were promoting portfolio optimization and real options, both of which acknowledge volatility of prices. For a sense of history of the subject of uncertainty in the oil/gas industry, consider reading these publications. [3][4][5][6][9][10][11][12]

Language of Risk Analysis and Decision Making

Descriptive Statistics

Descriptive statistics should aid communication. As the name suggests, it is intended to develop and explain features of data or of probability distributions. We begin the discussion with data, perhaps visualized as collections of numbers expressing possible values of some set of variables; but it is common practice to extend the language to three common types of graphs used to relate variables to probability: histograms, probability density functions, and cumulative distributions. That is, we habitually use the same words (mean, median, standard deviation, and so on) in the context of data as well as for these graphs. In so doing, we create an additional opportunity for miscommunication. Thus, although we require only a few words and phrases from a lexicon of statistics and probability, it is essential that we use them carefully.

There is an unspoken objective when we start with data: we imagine the data as a sample from some abstract population, and we wish to describe the population. Thus, we use simple algebraic formulas to obtain various statistics or descriptors of the data, in hopes of inferring what the underlying population (i.e., reality, nature, the truth) might look like. Consider, for example, the database in Table 10.1 for 26 shallow gas wells in a given field: pay thickness, porosity, reservoir temperature, initial pressure, water saturation, and estimated ultimate recovery (EUR). We can use various functions in Excel to describe this set of data. The "underlying populations" in this case would refer to the corresponding data for all the wells we could drill in the field. We concentrate on the porosities, but one may substitute any of the other five parameters.

Measures of Central Tendency

Our first group of statistics helps us find "typical" values of the data, called measures of central tendency. Let us calculate the three common ones.

Mean. Sum the 26 values and divide by 26 (nicknames: arithmetic mean, expected value, average, arithmetic average). The Excel name is "AVERAGE." The mean porosity is 0.127.

Median. First sort the 26 values in ascending order and take the average of the two middle values (the 13th and 14th numbers in the ascending list). For an odd number of data, the median is the middle value once sorted (nickname: P50). Note that P50 is not a probability; it is the value at which the cumulative probability reaches 50%. The Excel function is "MEDIAN." The median porosity is 0.120. The rule works regardless of repeated data values.

Mode. Find the value that occurs most often. In case of a tie, report all tied values. The Excel function is "MODE." Excel, however, reports only one number in case of a tie: the value that appears first in the list as entered in a column or row. Thus it reports the mode of the porosity data as 0.100, which appears five times and is encountered first, even though the value 0.120 also appears five times.

We therefore see that the mode is ambiguous. Rather than one number, there may be several. In Excel we get a different value by simply recording the data in a different order. This situation can be confusing. Fortunately, we seldom use the mode of data in a serious way because what we care about is the underlying population's mode, of which the data's mode, please note, is generally not a good estimator. Later, we present a way to fit a theoretical curve to data and then find the (unique) mode of the fitted curve, a relatively unambiguous process, except for the possibility of competing curves with slightly different modes.
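The mode's ambiguity is easy to demonstrate. In the sketch below (hypothetical porosity-like values, not the Table 10.1 data), Python's statistics module mimics Excel's tie behavior with mode() and reports every tied value with multimode():

```python
import statistics

# Hypothetical porosity sample with two tied modes (0.10 and 0.12):
por = [0.10, 0.10, 0.10, 0.12, 0.12, 0.12, 0.13, 0.15, 0.18, 0.21]

mean_por = statistics.mean(por)        # arithmetic average: 0.133
median_por = statistics.median(por)    # average of the two middle sorted values
first_mode = statistics.mode(por)      # first tied value, like Excel MODE
all_modes = statistics.multimode(por)  # every tied value: [0.1, 0.12]
```

Reordering the list changes what mode() returns, which is exactly the ambiguity described above; multimode() does not have that problem.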

These three values—mean, median, and mode—are referred to as measures of central tendency. Each one's reaction to changes in the data set determines when it is used. The mean is influenced by the extreme values, whereas the median and mode are not. Thus, one or more very large values cause the mean to drift toward those values. Changing the largest or smallest values does not affect the median (and seldom the mode) but would alter the mean. The mode and median are insensitive to data perturbations because they are based more on the rank, or order, of the numbers than on the values themselves.

The median is often used to report salaries and house prices, allowing people to see where they fit relative to the "middle." Newspapers report housing prices in major cities periodically. For instance, Table 10.2 appeared in the Houston Chronicle to compare prices in five major cities in Texas. The mean values are roughly 20% larger than the median, reflecting the influence of a relatively small number of very expensive houses.

Measures of Dispersion and Symmetry

The next group of statistics describes how the data are dispersed or spread out from the "center." That is, the degree of data dispersion advises us how well our chosen measure of central tendency does indeed represent the data and, by extension, how much we can trust it to describe the underlying population.

Population Variance. The average of the squared deviations from the mean:

$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \mu)^2$

In Excel, this is VARP.

Sample Variance. The sum of the squared deviations from the mean divided by n − 1 rather than by n (in Excel, this is VAR).

Population Standard Deviation. Population standard deviation is the square root of population variance (in Excel, this is STDEVP).

Sample Standard Deviation. Sample standard deviation is the square root of sample variance (in Excel, this is STDEV).

All this variety is necessary because of the implicit objective of trying to describe the underlying population, not just the sample data. It can be shown that VAR and STDEV are better estimators of the actual population's values of these statistics.

Of all these measures, STDEV is the most commonly used. What does it signify? The answer depends to some degree on the situation but, in general, the larger the standard deviation, the more the data are spread out. Consider the 26 wells in Table 10.1. Their descriptive statistics appear in Table 10.3. In particular, the STDEVs of porosity and EUR are respectively 0.033 and 494. But we never simply look at STDEV without referencing the mean of the same data. A better method of comparing dispersion is to look at the "coefficient of variation," which, unlike the other measures, is dimensionless.

Coefficient of Variation. The coefficient of variation = STDEV/mean. Thus, porosity and EUR are quite different in this regard; their respective coefficients of variation are 0.26 and 0.78. Temperature has an even smaller dispersion, with a coefficient of variation of 0.07.
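A short sketch of that comparison, with hypothetical values rather than those of Table 10.1: once each standard deviation is divided by its mean, the two variables can be compared directly even though their units and scales differ.

```python
import statistics

# Hypothetical samples on very different scales:
porosity = [0.09, 0.10, 0.12, 0.13, 0.16, 0.18]   # fraction
eur = [150, 300, 420, 700, 1100, 1900]            # e.g., MMcf

cv_por = statistics.stdev(porosity) / statistics.mean(porosity)
cv_eur = statistics.stdev(eur) / statistics.mean(eur)
# cv_por is about 0.27; cv_eur is about 0.85: EUR is far more dispersed
```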

Skewness. Skewness is the next level of description for data. It measures the lack of symmetry in the data. While there are many formulas in the literature, the formula in Excel is

$\mathrm{SKEW} = \frac{n}{(n-1)(n-2)} \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{s} \right)^{3}$

where $\bar{x}$ is the sample mean and s is the sample standard deviation.

A symmetric data set would have a mean, m, and for each point x smaller than m, there would be one and only one point x′ larger than m with the property that x′ − m = m − x. Such a set would have SKEW = 0. Otherwise, a data set is skewed right or left depending on whether it includes some points much larger than the mean (positive, skewed right) or much smaller (negative, skewed left). To help us understand skewness, we must introduce some graphs.
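The Excel skewness formula is easy to implement directly. The sketch below reproduces it and confirms that a symmetric data set returns zero while a set with one much larger value returns a positive (right) skew:

```python
import math

def skew(data):
    """Sample skewness per the Excel SKEW formula:
    n/((n-1)(n-2)) * sum(((x - xbar)/s)**3)."""
    n = len(data)
    xbar = sum(data) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))
    return n / ((n - 1) * (n - 2)) * sum(((x - xbar) / s) ** 3 for x in data)

symmetric = skew([1, 2, 3, 4, 5])  # 0: perfectly symmetric
right = skew([1, 1, 2, 3, 10])     # positive: one large value pulls it right
```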

Histograms, Random Variables, and Probability Distributions (Density Functions and Cumulative Distribution Functions)

A histogram is formed by splitting the data into classes (also called bins or groups) of equal width, counting the number of data that fall into each class (the class frequency, which of course becomes a probability when divided by the total number of data), and building a column chart in which the classes determine the column widths and the frequency determines their heights. The porosity data in Table 10.1 yield the histogram in Fig. 10.1.
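The class-and-count procedure takes only a few lines of code. The sketch below uses hypothetical porosity values (not the Table 10.1 data) and four equal-width classes; the last class is closed on the right so the maximum datum is counted:

```python
def histogram(data, nclasses):
    """Count data into nclasses equal-width classes."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / nclasses
    counts = [0] * nclasses
    for value in data:
        i = min(int((value - lo) / width), nclasses - 1)  # keep max in last class
        counts[i] += 1
    return counts

por = [0.081, 0.09, 0.10, 0.105, 0.115, 0.12, 0.125, 0.142, 0.16, 0.199]
counts = histogram(por, 4)              # class frequencies
probs = [c / len(por) for c in counts]  # frequencies become probabilities
```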

Three more histograms, generated with Monte Carlo simulation software, show the three cases of skewness (symmetrical, right skewed, and left skewed). See Figs. 10.2 through 10.4.

Whereas histograms arise from data, probability density functions are graphs of variables expressed as theoretical or idealized curves based on formulas. Four common density functions are the normal, log-normal, triangular, and beta distributions. Figs. 10.5 through 10.8 show several of these curves.

The formulas behind these curves often involve the exponential function. For example, the formula for a normal distribution with mean, μ, and standard deviation, σ, is

$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / (2\sigma^2)}$....................(10.1)

And the log-normal curve with mean, μ, and standard deviation, σ, has the formula

$f(x) = \frac{1}{x\beta\sqrt{2\pi}}\, e^{-(\ln x - \alpha)^2 / (2\beta^2)}$....................(10.2)

where

$\beta^2 = \ln\!\left(1 + \frac{\sigma^2}{\mu^2}\right)$....................(10.3)

and

$\alpha = \ln\mu - \frac{\beta^2}{2}$....................(10.4)

The single rule for a probability density function is that the area under the curve equals 1.00 exactly.

To each density function y = f(x), there corresponds a cumulative distribution function y = F(x), obtained by integrating f from the left end of its range up to x. Because the total area under f is 1, the cumulative function increases monotonically from 0 to 1.

Figs. 10.5 through 10.8 also show the figures of the cumulative functions corresponding to the density functions. The variable, X, on the horizontal axis of a density function (or the associated cumulative graph) is called a random variable.

In practice, when we attempt to estimate a variable by assigning a range of possible values, we are in effect defining a random variable. Properly speaking, we have been discussing one of the two major classes of random variables—continuous ones. Shortly, we introduce the notion of a discrete random variable, for which we have histograms, density functions, cumulative curves, and the interpretations of the statistics we have defined.

The reason to have both density functions and cumulative functions is that density functions help us identify the mode, the range, and the symmetry (or asymmetry) of a random variable. Cumulative functions help us determine the chance that the variable will or will not exceed some value or fall between two values, namely P(X < c), P(X > c), and P(c < X < d), where c and d are real numbers in the range of x. In practice, cumulative functions answer questions like: "What is the chance that the discovered reserves will exceed 100 million bbl?" "What is the chance of losing money on this investment [i.e., what is P(NPV < 0)]?" "How likely is it that the well will be drilled in less than 65 days?"
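For a normal distribution, these cumulative probabilities can be computed from the error function. The sketch below uses a hypothetical reserves distribution (normal with mean 80 and standard deviation 25 million bbl, chosen only to keep the arithmetic transparent; in practice reserves are more often modeled as log-normal):

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X < x) for a normal random variable, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

p_exceed_100 = 1 - normal_cdf(100, 80, 25)                    # P(X > 100)
p_between = normal_cdf(100, 80, 25) - normal_cdf(60, 80, 25)  # P(60 < X < 100)
# p_exceed_100 is about 0.21; p_between is about 0.58
```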

Curve Fitting: The Relationship Between Histograms and Density Functions

As mentioned earlier, the implied objective of data analysis is often to confirm and characterize an underlying distribution from which the given data could reasonably have been drawn. Today, we enjoy a choice of software that, when supplied a given histogram, fits various probability distributions (normal, log-normal, beta, triangular) to it. A common metric to judge the "goodness of fit" of these distributions to the histogram is the "chi-square" value, which is obtained by summing the normalized squared errors.

$\chi^2 = \sum_i \frac{(h_i - y_i)^2}{y_i}$....................(10.5)

where hi is the height of the histogram in the ith class, and yi is the height (y-value) of the fitted curve there. The curve that yields the minimum chi-square value is considered the best fit. Thus, we begin with data, construct a histogram, then find the best-fitting curve, and assume that this curve represents the population from whence the data came.

We do this because when we build a Monte Carlo simulation model, we want to sample hundreds or thousands of values from this imputed population and then, in accordance with our model (i.e., a formula), combine them with samples of other variables from their parent populations. For instance, one way to estimate oil in place for an undrilled prospect is to use analogous data for net volume, porosity, oil saturation, and formation volume factor; fit a curve to each data set; then sample a single value for each of these four variables and take their product. This gives one possible value for the oil in place. We then repeat this process a thousand times and generate a histogram of our results to represent the possible oil in place. Now that we have these graphical interpretations, we should extend our definitions of mean, mode, median, and standard deviation to them.
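A minimal sketch of that sampling loop, with entirely hypothetical distributions standing in for curves fitted to analog data (the 7,758 factor converts acre-ft of oil-filled pore space to barrels):

```python
import math
import random
import statistics

random.seed(7)

def one_trial():
    """One Monte Carlo trial: sample each input from its (hypothetical)
    fitted distribution and combine via the volumetric formula."""
    net_volume = random.lognormvariate(math.log(5000), 0.4)  # acre-ft
    porosity = random.gauss(0.13, 0.03)                      # fraction
    oil_sat = random.gauss(0.70, 0.05)                       # fraction
    bo = random.gauss(1.2, 0.05)                             # RB/STB
    return 7758 * net_volume * porosity * oil_sat / bo       # STB in place

trials = [one_trial() for _ in range(10_000)]
p50 = statistics.median(trials)  # one summary of the output distribution
```

A histogram of trials is the Monte Carlo estimate of the oil-in-place distribution described above.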

For histograms, although there are definitions that use the groupings, the best way to estimate the mean and median is simply to find those of the histogram's original data. The mode of a histogram is generally defined to be the midpoint of the class having the highest frequency (the so-called modal class). In case of a tie, when the two classes are adjacent, we use the common boundary for the mode. When the two classes are not adjacent, we say the data or the histogram is bimodal. One can have a multimodal data set.

One problem with this definition of mode for a histogram is that it is a function of the number of classes. That is, if we rebuild the histogram with a different number of classes, the modal class will move, as will the mode. It turns out that when we fit a curve to a histogram (i.e., fit a curve to data via a histogram), the best-fitting curve gives us a relatively unambiguous value of a mode. In practice, although changing the number of classes could result in a different curve fit, the change tends to be small. Choosing another type of curve (say, beta rather than a triangular) would change the mode also. Nevertheless, this definition of mode (the one from the best curve fit) is adequate for most purposes.

Interpreting Statistics for Density Functions. The mode of a density function is the value where the curve reaches its maximum. This definition is clear and useful. The median of a density function is the value that divides the area under the curve into equal pieces. That is, the median, or P50, represents a value M for which a sample is equally likely to be less than M and greater than M.

The mean of a density function corresponds to the x coordinate of the centroid of the two-dimensional (2D) region bounded by the curve and the X-axis. This definition, while unambiguous, is hard to explain and not easy to apply by inspection; two people might easily disagree on the location of the mean of a density function. Fig. 10.9 shows these three measures of central tendency on a log-normal density function.

Interpreting Statistics for Cumulative Distributions. The only obvious statistic for a cumulative function is the median, which is found where the curve crosses the horizontal grid line determined by 0.5 on the vertical axis. The mode corresponds to the point of inflection of the curve, but this is hard to locate by eye. The mean has no obvious interpretation in this context.

Table 10.4 summarizes the interpretations of these central tendency measures for the four contexts: data, histograms, density functions, and cumulative curves. Table 10.5 shows the calculated average, standard deviation, and coefficient of variability for each of five data sets.

Kurtosis, a Fourth-Order Statistic. Kurtosis is defined in terms of fourth powers of (x – m), where m is the mean, continuing the progression that defines mean, standard deviation, and skewness. Although widely used by statisticians and used to some degree by geoscientists, this statistic, which measures peakedness, is not discussed here because it plays no active role in the risk analysis methods currently used in the oil/gas industry.

Percentiles and Confidence Intervals. The nth percentile is the value on the X- (value) axis corresponding to n/100 on the Y- (cumulative probability) axis. We denote it Pn.

A C-percent confidence interval (also called probability or certainty intervals) is obtained by removing (100 – C)/2% from each end of the range of a distribution. Thus, we have an 80% confidence interval that ranges from P10 to P90 and a 90% confidence interval that ranges from P5 to P95. Some companies prefer one or the other of these confidence intervals as a practical range of possible outcomes when modeling an investment or when estimating reserves, cost, or time.
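Percentiles, and the confidence intervals built from them, can be computed directly from simulated samples. This sketch uses a simple nearest-rank percentile estimate (software packages interpolate slightly differently) on illustrative normal samples:

```python
import random

def percentile(data, p):
    """Nearest-rank percentile: the value below which about p percent
    of the data fall (software packages interpolate differently)."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100.0 * len(s)) - 1))
    return s[k]

random.seed(1)
# Illustrative samples from a normal "population."
samples = [random.gauss(100.0, 15.0) for _ in range(10000)]

# An 80% confidence interval removes (100 - 80)/2 = 10% from each tail.
p10 = percentile(samples, 10)
p90 = percentile(samples, 90)
interval_80 = (p10, p90)
```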

When To Use a Given Distribution. One of the challenges facing someone building a model is deciding which distribution to use to represent a given parameter. While there are few hard-and-fast rules (although some people are vocal in their support of particular distributions), the following list provides guidelines for using some common distributions.

Normal Distributions. Normal distributions are often used to represent variables that are themselves sums (aggregations) or averages of other variables. (See the central limit theorem discussion.) Four popular applications are:

  • Field production, which is a sum of production from various wells.
  • Reserves for a business unit, which are sums of reserves from various fields.
  • Total cost, which is a sum of line-item costs.
  • Average porosity over a given structure.


Normal distributions are also used to characterize:

  • Errors in measurement (temperature and pressure).
  • People's heights.
  • Time to complete simple activities.


Samples of normal distributions should inherit the symmetry of their parent, which provides a simple check on samples suspected to come from an underlying normal distribution: calculate the mean, median, and skew. The mean and median should be about the same; the skew should be approximately zero.
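The suggested check, comparing mean with median and computing the skew, is easy to automate. This sketch applies a textbook sample-skewness formula to illustrative normal samples:

```python
import random
import statistics

def skewness(data):
    """Sample skewness: mean cubed deviation divided by the cube of
    the standard deviation; approximately zero for symmetric data."""
    m = statistics.mean(data)
    s = statistics.stdev(data)
    return sum((x - m) ** 3 for x in data) / (len(data) * s ** 3)

random.seed(7)
# Illustrative samples suspected to come from a normal parent.
samples = [random.gauss(50.0, 5.0) for _ in range(5000)]

mean_val = statistics.mean(samples)
median_val = statistics.median(samples)
skew_val = skewness(samples)
# Normal parent: mean and median nearly equal, skewness near zero.
```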

Log-Normal Distributions. The log-normal distribution is very popular in the oil/gas industry, partly because it arises in calculating resources and reserves. By definition, X is log-normal if Ln(X) is normal. It follows from the central limit theorem (discussed later) that products are approximately log-normal. If Y = X1 × X2 ×...× XN , then Ln (Y) = Ln (X1) + Ln (X2) +..., which, being a sum of distributions, is approximately normal, making Y approximately log-normal. Common examples of log-normal distributions include:

  • Areas (of structures in a play).
  • Volumes (of resources by taking products of volumes, porosity, saturation, etc.).
  • Production rates (from Darcy's equation).
  • Time to reach pseudosteady state (a product formula involving permeability, compressibility, viscosity, distance, etc.).


Other examples of variables often modeled with log-normal distributions are permeability, time to complete complex tasks, new home prices, annual incomes within a corporation, and ratios of prices for a commodity in successive time periods.

A simple test for log-normality for data is to take the logs of the data and see if they form a symmetric histogram. Bear in mind that log-normal distributions are always skewed right and have a natural range from 0 to infinity. In recent years, a modified (three parameter) log-normal has been introduced that can be skewed right or left, but this distribution has not yet become widely used.
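The log-transform test can be sketched as follows; the permeability-like data are simulated here purely for illustration:

```python
import math
import random
import statistics

def skewness(data):
    """Sample skewness (approximately zero for symmetric data)."""
    m = statistics.mean(data)
    s = statistics.stdev(data)
    return sum((x - m) ** 3 for x in data) / (len(data) * s ** 3)

random.seed(3)
# Illustrative permeability-like data drawn from a log-normal parent.
perm = [random.lognormvariate(3.0, 1.0) for _ in range(5000)]

raw_skew = skewness(perm)                          # strongly right-skewed
log_skew = skewness([math.log(x) for x in perm])   # roughly symmetric
```

A raw skew far above zero with a near-zero skew of the logs is consistent with a log-normal parent.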

Triangular Distributions. Triangular distributions are widely used by people who simply want to describe a variable by its range and mode (minimum, maximum, and most likely values). Triangular distributions may be symmetric or skewed left or right, depending on the mode's location; and the minimum and maximum have no (zero) chance of occurring.

Some argue that triangular distributions are artificial and do not appear in nature, but they are unambiguous, understandable, and easy to define when working with experts. Beyond that, however, triangular distributions have other advantages. First, though "artificial," they can nevertheless be quite accurate (remember, any distribution only imitates reality). Again, when one proceeds to combine the triangular distributions for a number of variables, the results tend toward the normal or log-normal distributions, preferred by purists. Finally, the extra effort in defining more "natural" distributions for the input variables is largely wasted when the outcome does not clearly reflect the difference.
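A brief sketch of working with a triangular distribution, using Python's built-in sampler; the minimum, mode, and maximum here are illustrative. Sampled values approach, but never equal, the endpoints, and the mean is the simple average of the three defining parameters:

```python
import random
import statistics

random.seed(11)
# Illustrative right-skewed triangular: mode nearer the minimum.
low, mode, high = 10.0, 20.0, 50.0

# Note the argument order of the standard-library sampler: low, high, mode.
samples = [random.triangular(low, high, mode) for _ in range(20000)]

sample_mean = statistics.mean(samples)
theoretical_mean = (low + mode + high) / 3.0   # about 26.7 here
```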

Discrete Distributions. A continuous distribution has the property that for any two values a and b that may be sampled, every value between a and b is eligible to be sampled as well. A discrete distribution, by contrast, is specified by a set of X-values, {x1, x2, x3,...} (which could be countably infinite), together with their corresponding probabilities, {p1, p2, p3,...}. The most commonly used discrete distributions are the binomial distribution, the general discrete distribution, and the Poisson distribution.

The Central Limit Theorem. Let Y = X1 + X2 + ... + Xn and Z = Y/n, where X1, X2, ..., Xn are independent, identical random variables, each with mean μ and standard deviation σ. Then both Y and Z are approximately normally distributed, the respective means of Y and Z are nμ and μ, and the respective standard deviations are approximately σ√n and σ/√n.

This approximation improves as n increases. Note that this says the coefficient of variation, the ratio of standard deviation to mean, shrinks by a factor of √n. Even if X1, X2,... Xn are not identical or independent, the result is still approximately true. Adding distributions results in a distribution that is approximately normal, even if the summands are not symmetric; the mean of Y equals the sum of the means of the Xi (exactly); and the standard deviation of Y is approximately 1/√n times the sum of the standard deviations of the Xi and, thus, the coefficient of variation diminishes.

When is the approximation poor? Two conditions retard this process: a few dominant distributions and/or strong correlation among two or more of the inputs. Some illustrations may help.

For instance, take 10 identical log-normal distributions, each having mean 100 and standard deviation 40 (thus, with coefficient of variation, CV, of 0.40). The sum of these distributions has mean 1,000 and standard deviation 131.4, so CV = 0.131, which is very close to 0.40/sqrt(10) or 0.127.
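The arithmetic above is easy to verify by simulation. This sketch converts the stated mean and standard deviation to the parameters of the underlying normal (a standard moment-matching step, not shown in the text) and checks that the coefficient of variation of the sum shrinks toward the theoretical 0.127:

```python
import math
import random
import statistics

def lognormal_params(mean, sd):
    """mu and sigma of the underlying normal for a log-normal with the
    given arithmetic mean and standard deviation (moment matching)."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    return math.log(mean) - sigma2 / 2.0, math.sqrt(sigma2)

random.seed(5)
mu, sigma = lognormal_params(100.0, 40.0)   # each summand: mean 100, sd 40

# Sum ten independent, identical log-normal summands, many times over.
sums = [sum(random.lognormvariate(mu, sigma) for _ in range(10))
        for _ in range(20000)]

mean_sum = statistics.mean(sums)             # near 1,000
cv_sum = statistics.stdev(sums) / mean_sum   # near 0.40/sqrt(10) = 0.127
```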

On the other hand, if we replace three of the summands with more dominant distributions, say each having a mean of 1,000 and varying standard deviations of 250, 300, and 400, then the sum has a mean of 3,700 and standard deviation 560, yielding a CV of 0.15. As one might expect, the sum of standard deviations divided by sqrt(10) is 389—not very close to the actual standard deviation. It makes more sense to divide the sum by sqrt(3), acknowledging the dominance of three of the summands. As one can find by Monte Carlo simulation, however, even in this case, the sum is still reasonably symmetric. The practical implications of this theorem are numerous and noteworthy.

Total cost is a distribution with a much smaller uncertainty than the component line items. Adding the most likely costs for each line item often results in a value much too low to be used as a base estimate.

Business unit reserves have a relatively narrow range compared to field reserves. Average porosity, average saturations, average net pay for a given structure area tend to be best represented by normal distributions, not the log-normal distributions conventionally used.

Laws of Probability

Probability theory is the cousin of statistics. Courses in probability are generally offered in the mathematics department of universities, whereas courses in statistics may be offered in several departments, acknowledging the wide variety of applications.

Our interest in probability stems from the following items we must estimate:

  • The probability of success of a geological prospect.
  • The probability of the success of prospect B, once we know that prospect A was successful.
  • The probabilities of various outcomes when we have a discovery (for example, the chance of the field being large, medium, or small in volume).


While much of our application of the laws of probability is with decision trees, the notion of a discrete variable requires that we define probability. For any event, A, we use the notation P(A) (read "the probability of A") to indicate a number between 0 and 1 that represents how likely it is that A will occur. Lest this sound too abstract, consider these facts:

  • A = the occurrence of two heads when we toss two fair coins (or toss one fair coin twice); P(A) = ¼.
  • A = the occurrence of drawing a red jack from a poker deck; P(A) = 2/52.
  • A = the occurrence some time next year of a tropical storm similar to the one in Houston in July 2001; P(A) = 1/500.
  • A = the event that, in a group of 25 people, at least two of them share a birthday; P(A) = 1/2, approximately.
  • A = the event that the sun will not rise tomorrow; P(A) = 0.


The numbers come from different sources. Take the red jack example. There are 52 cards in a poker deck (excluding the jokers), two of which are red jacks. We simply take the ratio for the probability. Such a method is called a counting technique.

Similarly, when we toss two fair coins, we know that there are four outcomes, and we believe that they are "equally likely," for that is indeed what we mean by a fair coin. The Houston storm of July 2001 recorded as much as 34 in. of rain in a two- or three-day span, flooded several sections of highway (enough to float dozens of tractor-trailers), and drove thousands of families from their homes. Meteorologists, who have methods of assessing such things, said it was a "500-year flood."
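The counting-based probabilities above, the red jack and the shared-birthday example, can be reproduced with elementary arithmetic:

```python
# Counting technique: 2 red jacks among 52 cards.
p_red_jack = 2 / 52

# Birthday problem for 25 people: 1 minus the probability that all
# 25 birthdays are distinct (ignoring leap years).
p_all_distinct = 1.0
for k in range(25):
    p_all_distinct *= (365 - k) / 365
p_shared = 1.0 - p_all_distinct   # a bit better than 1/2
```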

Most believe that it is certain that the sun will rise tomorrow (the alternative is not clear) and would, therefore, assign a probability of 1.0 to its rising and a probability of 0.0 to its negation (one of the rules of probability).

Sometimes we can count, but often we must estimate. Geologists must estimate the chance that a source rock was available, the conditions were right to create hydrocarbons, the timing and migration path were right for the hydrocarbon to find its way to the reservoir trap, the reservoir was adequately sealed once the hydrocarbons got there, and the reservoir rock is of adequate permeability to allow the hydrocarbons to flow to a wellbore. This complex estimation is done daily with sophisticated models and experienced, highly educated people. We use experience and consensus and, in the end, admit that we are estimating probability.

Rules of Probability. Rule 1: 1 – P(A) = P(-A), the complement of A. Alternately, P(A) + P(-A) = 1.0. This rule says that either A happens or it does not. Rule 1′: let A1, A2, ..., An be mutually exclusive and exhaustive events, meaning that exactly one of them will happen; then P(A1) + P(A2) + ... + P(An) = 1.

For the next rule, we need a new definition and new notation. We write P(A|B) and say the probability of A knowing B (or "if B" or "given B") to mean the revised probability estimate for A when we assume B is true (i.e., B already happened). B is called the condition, and P(A|B) is called the conditional probability. We write P(A & B) and say the probability that both A and B happen. (This is called the joint probability.) Rule 2: P(A & B) = P(A|B) × P(B). We say A and B are independent if P(A|B) = P(A). Note that when A and B are independent, P(A & B) = P(A) × P(B). Using the fact that A & B means the same as B & A and, thus, interpreting Rule 2 as P(B & A) = P(B|A) × P(A), it follows that P(A|B) × P(B) = P(B|A) × P(A), from which we can deduce Rule 3: P(B|A) = [P(A|B) × P(B)]/P(A).
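A small numeric sketch of Rules 2 and 3, using made-up prospect probabilities chosen to be mutually consistent:

```python
# Hypothetical probabilities for two prospects.
p_b = 0.30          # P(B)
p_a = 0.25          # P(A)
p_a_given_b = 0.50  # P(A|B)

# Rule 2: joint probability of A and B.
p_a_and_b = p_a_given_b * p_b              # P(A & B) = 0.15

# Rule 3 (Bayes): reverse the direction of conditioning.
p_b_given_a = p_a_given_b * p_b / p_a      # P(B|A) = 0.60

# A and B are independent only if P(A|B) equals P(A); not so here.
independent = abs(p_a_given_b - p_a) < 1e-12
```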

The next rule is often paired with Rule 3 and called Bayes' Theorem. [13] Rule 4: given the n mutually exclusive and exhaustive events A1, A2,..., An, then P(B) = P(B&A1) + P(B&A2) + ... + P(B&An).

Application of Bayes' Theorem. An example application of Bayes' Theorem appeared in Murtha[14] and is presented in Sec. 10.4.

The Tools of the Trade

Introduction

Risk analysis is a term used in many industries, often loosely, but we shall be precise. By risk analysis, we mean applying analytical tools to identify, describe, quantify, and explain uncertainty and its consequences for petroleum industry projects. Typically, there is money involved. Always, we are trying to estimate something of value or cost. Sometimes, but not always, we are trying to choose between competing courses of action.

The tools we use depend on the nature of the problem we are trying to solve. Often when we are choosing between competing alternatives, we turn toward decision trees. When we simply wish to quantify the risk or the uncertainty, the tool of choice is Monte Carlo simulation. These tools, decision trees and Monte Carlo simulation, are the two main tools described in this chapter. We show proper applications of each as well as some misuses.

We rely on two more basic methods, descriptive statistics and data analysis, to communicate and to assist us in actually applying decision trees and Monte Carlo simulation. There are also more advanced tools such as risk optimization, which combines classical optimization with Monte Carlo simulation, and risk decision trees, which blend our two main tools.

Decision Trees

A decision tree is a visual model consisting of nodes and branches, such as Fig. 10.10, explained in detail later in this chapter. For now, observe that it grows from left to right, beginning with a root decision node (square, also called a choice node) the branches of which represent two or more competing options available to the decision makers. At the end of these initial branches, there is either an end node (triangle, also called a value node) or an uncertainty node (circle, also called a chance node). The end node represents a fixed value. The circle's branches represent the possible outcomes along with their respective probabilities (which sum to 1.0). Beyond these initial uncertainty nodes' branches, there may be more squares and more circles, which generally alternate until each path terminates in an end node.

The idea is to describe several possible paths representing deliberate actions or choices, followed by events with different chances of occurrence. The actions are within the control of the decision-makers, but the events are not. By assigning probabilities and values along the way, we can evaluate each path to select an optimal path. The evaluation is simple, consisting of alternating between calculating weighted averages or expected values at each circle, then choosing the best action from each square. Ultimately, we obtain a value for the root node. The solution to the decision tree consists in this pairing of root value and optimal path.

The numbers at end nodes generally represent either net present value (NPV) or marginal cost—the goal being to either maximize NPV or minimize cost. Thus, the optimal action at each square might be a maximum (for NPV) or a minimum (for cost) of the various branches emanating from that square.

Fig. 10.10 shows a simple decision tree with one choice node and one chance node. The decision tree represents a choice between a safe and a risky investment. Selecting the risky alternative results in a 50% chance of winning $40 and a 50% chance of losing $10. Alternatively, one can be guaranteed $8. We solve the decision tree by first calculating the expected value of the chance node, 0.5 × 40 + 0.5 × (–10) = 15, and then selecting the better of the two alternatives: $15 vs. $8, namely $15. The "correct" path is the risky investment, and its value is $15.
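The roll-back procedure (weighted averages at chance nodes, the best branch at choice nodes) can be sketched as a short recursive function. The tuple-based tree encoding here is an illustrative convention, not the format of any particular software package; the second tree reproduces the set-pipe cost example discussed later in the chapter:

```python
def solve(node, maximize=True):
    """Roll a decision tree back from right to left: probability-weighted
    averages at chance nodes, the best branch at choice nodes."""
    kind = node[0]
    if kind == "end":
        return node[1]
    if kind == "chance":
        return sum(p * solve(child, maximize) for p, child in node[1])
    if kind == "choice":
        values = [solve(child, maximize) for child in node[1]]
        return max(values) if maximize else min(values)
    raise ValueError(f"unknown node kind: {kind}")

# The safe-vs-risky investment: a guaranteed $8 against a 50/50
# chance of winning $40 or losing $10.
investment = ("choice", [
    ("end", 8),
    ("chance", [(0.5, ("end", 40)), (0.5, ("end", -10))]),
])
ev_investment = solve(investment)   # 15: take the risky branch

# The set-pipe example (values are costs in $1,000s, so minimize).
set_pipe = ("choice", [
    ("chance", [(0.2, ("end", 100)), (0.8, ("end", 0))]),   # drill ahead
    ("chance", [(0.2, ("end", 25)), (0.8, ("end", 10))]),   # set pipe
])
ev_cost = solve(set_pipe, maximize=False)   # 13, i.e., $13,000
```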

Some would question this logic and say that they prefer the sure thing of $8 to the chance of losing $10. A person who would prefer the guaranteed $8 might also prefer $7 or $6 to the risky investment. Trial and error would reveal some value, say $6, for which that person would be indifferent between the two alternatives. That is, they would be just as happy to have $6 as they would to have the opportunity to flip a fair coin and get paid $40 if heads comes up and lose $10 if tails comes up. In this case, we call $6 the certainty equivalent of the chance. The difference between the actual expected value and the certainty equivalent, in this case $15 – $6 = $9, is called the risk premium, suggesting the price you would pay to mitigate the risk. Pursuing this line of reasoning leads us to the topic of utility functions. [15]

Utility Functions

Suppose you are faced with a risky choice, say whether to drill a prospect or divest yourself of it. If successful, you would then develop a field. If unsuccessful, you would lose the dry hole cost. For simplicity, we imagine the best and worst possible NPV, a loss of $100 million and a gain of $500 million. We proceed to construct a utility function for this investment. For brevity, we denote NPV as V and utility as U. We wish to construct a function, U = f(V). This function maps the range [–100, 500], usually represented on the horizontal axis, to the range [0, 1] on the vertical axis. Typically, the shape is concave down, like U = log(V), U = √V, or U = 1 – exp(–V/R), where R is a large constant. There is a formal set of rules (axioms) of utility theory from which one can prove certain propositions. A company or an individual willing to obey these axioms can develop and use a utility function for decision-making. Rather than get into the level of detail necessary to discuss the axioms, let us simply construct one utility curve to get the flavor of the process.

First, assign utility U = 1 for V = 500 and U = 0 for V = –100. Next, ask for what value V you would be indifferent between V and a 50-50 chance of –100 and 500. Suppose this value happens to be 50. This establishes that U = 0.5 corresponds to V = 50. The reason follows from the axioms of utility theory.[15] Essentially, these axioms allow us to build a decision tree with values, then replace the values with their utility counterparts. So, a decision tree offering a choice between a sure thing of 50 and a risky alternative with a 50% chance of –100 and a 50% chance of 500 would represent an indifferent choice. The corresponding utilities on the risky branch would have an expected utility of 0.5 × 0 + 0.5 × 1 = 0.5, matching the utility of the sure thing.

We now have three points on the utility curve. We obtain a fourth point by asking for a certainty equivalent of the 50-50 chance of –100 and +50. If this value is –40, then U(–40) = 1/4. Next, we ask for a certainty equivalent of the risky choice of a 50-50 chance of the values 50 and 500. If this is 150, then U(150) = 3/4. We could continue this process indefinitely, selecting a pair of values whose utilities are known and generating a value whose utility is halfway between. The resulting table of pairs can be plotted to obtain the utility curve.

In theory, once the utility curve is established, all decisions are based on utility rather than value. So any decision tree we build with value is converted to the corresponding utility-valued tree and solved for maximal utility. The solution yields a path to follow and an expected utility, which can be converted back to a value, namely its certainty equivalent. Finally, the difference between the certainty equivalent and the expected value of the original (value laden) decision tree is called the risk premium. Thus, everything that follows about decision trees could be coupled with utility theory, and the decision trees we build could be converted to ones with utility rather than values. Software can do this effortlessly by specifying a utility function.
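As a sketch of converting expected utility back to a certainty equivalent, consider the exponential form U = 1 – exp(–V/R) mentioned above, applied to the safe-vs-risky investment of Fig. 10.10. The risk tolerance R is an assumed value for illustration:

```python
import math

R = 200.0   # assumed risk tolerance; larger R means less risk aversion

def utility(v):
    """Exponential utility, one of the concave forms mentioned above."""
    return 1.0 - math.exp(-v / R)

def certainty_equivalent(expected_utility):
    """Invert the utility function: the sure value with that utility."""
    return -R * math.log(1.0 - expected_utility)

# The risky investment from Fig. 10.10: 50% of +40, 50% of -10.
expected_value = 0.5 * 40 + 0.5 * (-10)        # 15
eu = 0.5 * utility(40) + 0.5 * utility(-10)
ce = certainty_equivalent(eu)                  # below 15 for a risk-averse curve
risk_premium = expected_value - ce             # the price of risk aversion
```

With these assumed numbers, the certainty equivalent comes out a little under $14, so the risk premium is between $1 and $2.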

Decision Tree Basics and Examples

The expected value is an essential idea not only in decision trees, but throughout risk and decision analysis. Here are some of its interpretations and properties.

Expected Value. The expected value of a chance node:

  • Is the long-run average value of the chance.
  • Is the probability-weighted average of the end-node values.
  • Serves as a surrogate for the entire chance node.
  • Is a function of both the probabilities and the values.
  • Has the same units as the end-node values.
  • Is usually not equal to any single end-node value but always lies between the minimum and the maximum.
  • Provides no information about risk.

Chance Nodes. Any number of branches can emanate from a chance node. Typical decision-tree fragments have two, three, or four branches. As with choice nodes, we often limit the number of branches to three or four through consolidation. Sometimes, two or more decision trees represent the same decision. For instance, consider the choice of playing a game in which you must flip a fair coin exactly twice; for each head you win $10, and for each tail you lose $9. We can represent the game path (as opposed to the choice of "pass" or "do not play") with two consecutive chance nodes or with one chance node having either three or four outcomes. See Fig. 10.11. All of these decision trees are valid; each tells a different story of the game.
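A quick check that the alternative representations of the coin game are equivalent:

```python
# Coin game: flip a fair coin exactly twice; each head wins $10,
# each tail loses $9.

# Two consecutive chance nodes: value one flip, then double it.
ev_one_flip = 0.5 * 10 + 0.5 * (-9)       # 0.5
ev_sequential = 2 * ev_one_flip           # 1.0

# One consolidated chance node with three outcomes: HH, HT/TH, TT.
outcomes = [(0.25, 20), (0.50, 1), (0.25, -18)]
ev_consolidated = sum(p * v for p, v in outcomes)   # also 1.0
```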

Choice Nodes. Like chance nodes, choice nodes may have any number of branches, but often, they have two or three. Some simple examples are given next.

Two Choices—No Other Alternatives.

  • Do or do not proceed or delay.
  • Make/buy.
  • Rent/buy.
  • Drill vertical or slant hole.
  • Run 3D seismic or do not.
  • Replace bit or do not.
  • Set pipe or do not.


Three or More Choices.

  • Proceed/stop/delay.
  • Purchase/buy option to purchase/seek advice.
  • Make/buy/rent.
  • Develop/divest/buy information.

Solving a decision tree includes selecting the one branch from each choice node the expected value of which is optimal—namely the largest value when the decision tree values are NPV and the smallest value when the decision tree values are cost. The people involved in constructing a decision tree (sometimes referred to as framing the problem) have the responsibility of including all possible choices for each choice node. In practice, there is a tendency to second guess the solution process and disregard certain choices because they seem dominated by others. Avoid this. In general, the early stages of the decision tree building should be more like a brainstorming session, in which participants are open to all suggestions. Clearly, there must be a balance between the extremes of summarily rejecting a choice and going into too much detail. Experienced leaders can be useful at the problem-framing stage.

Discretization. One of the steps in reducing a Monte Carlo simulation to decision trees involves replacing a continuous distribution with a discrete counterpart. Elsewhere, we describe solutions to estimation problems by Monte Carlo simulation, resulting in an output distribution. Imagine we are trying to characterize the NPV of a field development that can range from an uncertain dry-hole cost through a large range of positive value, depending on several variables such as reserves, capital investment, productivity, oil/gas prices, and operating expenses. Most of us would conduct the analysis with Monte Carlo simulation, but some would prefer to portray the results to management with the help of a decision tree.

Consider the decision tree in Fig. 10.12, which depicts a classic problem of success vs. failure for an exploration well. The failure case ("dry hole") is simple enough, but success is a matter of degree. Yet no one would argue that the four cases listed here are the only actual possibilities. Rather, they are surrogates for ranges of possible outcomes with corresponding probabilities. The four discrete values might have been extracted from a distribution of possible successes. The process of replacing the continuous distribution with discrete values is called discretization.

Suppose we run a Monte Carlo simulation with 1,000 iterations; then examine the database of results in a spreadsheet, sorting the 1,000 values of NPV from small to large, then grouping them into categories, perhaps arbitrarily chosen, called uncommercial, small, medium, large, and giant. Within each category, we take the average value and calculate the fraction of values in that range, namely (number of data)/1,000. These are, respectively, the values and the probabilities entered in the decision tree. Clearly, each value is now a surrogate for some range. We do not really believe that there are only five possible outcomes to the choice of drill.
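The sort-group-average procedure can be sketched as follows; the simulated NPVs and the category boundaries are illustrative only:

```python
import random
import statistics

random.seed(9)
# Stand-in for simulation output: 1,000 NPV results (units arbitrary).
npvs = [random.gauss(50.0, 60.0) for _ in range(1000)]

# Illustrative, arbitrarily chosen category boundaries.
bins = [("uncommercial", -1e9, 0.0), ("small", 0.0, 40.0),
        ("medium", 40.0, 90.0), ("large", 90.0, 1e9)]

branches = []
for name, lo, hi in bins:
    members = [v for v in npvs if lo <= v < hi]
    if members:
        # Each branch gets the category average as its surrogate value
        # and the observed fraction of trials as its probability.
        branches.append((name, statistics.mean(members),
                         len(members) / len(npvs)))

total_prob = sum(p for _, _, p in branches)    # 1.0 by construction
ev_tree = sum(v * p for _, v, p in branches)   # equals the raw simulation mean
```

Note that the discretized branches preserve the expected value of the full simulation output exactly; only the detail within each range is lost.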

Conditional Probability and Bayes' Theorem in Decision Trees. Fig. 10.13 shows a simple decision tree for choosing between setting pipe and drilling ahead when approaching a zone of possible overpressure. Encountering the overpressured zone causes a "kick," which has a probability of occurrence of 0.2. The values in this case are costs, so we want to minimize the root-node cost.

The values represent estimated costs for three things: setting pipe ($10,000), controlling the overpressure without the protection of casing ($100,000), and controlling it with protection ($25,000, including the cost of setting pipe). The expected values of the two chance nodes, in thousands of dollars, are 0.2 × 100 + 0.8 × 0 = 20 and 0.2 × 25 + 0.8 × 10 = 13. Therefore, we decide to set pipe at an expected cost of $13,000 rather than drill ahead with an expected cost of $20,000.

When a decision tree has a second chance node following an earlier one, the probabilities on the later node's branches are conditional probabilities. Thus, in Fig. 10.14, the probabilities for Failure B and Success B are really P(~B|A) and P(B|A) because these events occur after Success A has occurred. Bayes' Theorem therefore comes into play, and the user must exercise care not to violate the laws of conditional probability, as the following example illustrates. First, we restate this result.

Bayes' Theorem P(B|A) = P(A|B) × P(B)/P(A); P(A) = P(A&B1) + P(A&B2) + ... + P(A&Bn), where B1, B2, ... Bn are mutually exclusive and exhaustive.


Example 10.1: Upgrading a Prospect[14]

Suppose that we believe two prospects are highly dependent on each other because they have a common source and a common potential seal. In particular, suppose P(A) = 0.2, P(B) = 0.1, and P(B|A) = 0.6. This is the type of revised estimate people tend to make when they believe A and B are highly correlated. The success of A "proves" the common uncertainties and makes B much more likely.

However, consider the direct application of Bayes' Theorem: P(A|B) = P(B|A) × P(A)/P(B) = (0.6) × (0.2)/0.1 = 1.2. Because no event, conditional or otherwise, can have a probability exceeding 1.0, we have reached a contradiction that we can blame on the assumptions.

The Problem

When two prospects are highly correlated, they must have similar probabilities; one cannot be twice as probable as the other. Another way to see this is to rearrange Bayes' Theorem as P(A|B)/P(A) = P(B|A)/P(B), which says that the relative increase in probability must be identical for both A and B.

Aside from these precautions when assigning probabilities to event branches, there is another use of Bayes' Theorem in decision trees, namely the value of information, one of the most important applications of decision trees.




Value of Information. We are often faced with a problem of assessing uncertainty (in the form of some state of nature) and its consequences with limited data. When the stakes are high, it may be possible to postpone the decision, invest some resources, and obtain further information (from some sort of diagnostic tool) that would make the decision more informed. Here are some typical states of nature we try to assess.

  • Will a prospect be commercial or noncommercial?
  • Will a target structure have closure or no closure?
  • Will our recent discovery yield a big, medium, or small field?
  • Will we need only one small platform or either one big or two small platforms?
  • Is the oil field a good or a marginal waterflood prospect?
  • Does the zone ahead of the drill bit have abnormal or normal pressure?


Some corresponding types of information are

  • Pilot flood—prospect: good/marginal.
  • 3D seismic—closure: likely/unlikely/can't tell.
  • 3D seismic—hydrocarbon: indicated/not indicated.
  • Well test—productivity: high/moderate/low.
  • Delineation well—platform needs: big/small.
  • Wireline logs—pressure: high/normal/low.





Example 10.2: Value of Information

Given a prospect, you are faced with the choice of drilling for which the geoscientists give a 60% chance of success or divesting. The chance of success is tantamount to the structure being closed, all other chance factors (source, timing and migration, reservoir quality) being very close to 1.0. A member of the team suggests the possibility of acquiring 3D seismic interpretation before proceeding. He does caution, however, that the seismic interpretation, like others in the past, could yield three possible outcomes: closure likely, closure unlikely, and inconclusive. The extended decision tree, shown in Fig. 10.15, incorporates these possibilities. Note how the original decision tree (before considering the third option of acquiring information) would have had only two choices and one chance node.

Additional data necessary to do the problem include the mean NPVs: $100 million for the success case, –$40 million for the failure case, and $10 million for divesting, along with the sensitivity table (Table 10.6), which indicates how accurate or reliable the 3D interpretation is in this particular context (for a given geographical/geological environment, with data of certain quality and interpretation by a particular individual/company). The interpretation of the table is P(closure likely|closed) = 0.70 = P(A1|B1), as opposed to the possible misinterpretation that the value 0.70 refers to the conditional probability in the opposite direction, P(B1|A1).

One should be curious about the source of this data. The values for success and failure cases and for divestiture are obtained by routine engineering analysis. The sensitivity table must come in part from the expert doing the interpretation. In a perfect world, these estimates would be backed by extensive empirical data. In reality, the best we can do is to estimate the entries and then do sensitivity analysis with our decision tree.

Speaking of perfect, there is a special case worth noting, namely when the information is "perfect," which corresponds to the information in Table 10.7. The entries in the lower left and upper right corners of the sensitivity table are called, respectively, false negatives [P(A3|B1)] and false positives [P(A1|B2)]. They both measure inaccuracy of the prediction device.




Solving the Value of Information Decision Tree. Before we can solve the expanded decision tree, we must fill in the remaining probabilities in the lower portion, which are calculated with Bayes' Theorem. First,

P(A1) = P(A1|B1) P(B1) + P(A1|B2) P(B2)

Similarly,

P(A2) = P(A2|B1) P(B1) + P(A2|B2) P(B2)

And

P(A3) = P(A3|B1) P(B1) + P(A3|B2) P(B2)

Next, we calculate the conditional probabilities.

P(B1|A1) = P(A1|B1) P(B1) / P(A1)

and

P(B1|A2) = P(A2|B1) P(B1) / P(A2), and likewise P(B1|A3), with the complements P(B2|Aj) = 1 – P(B1|Aj)

We leave it to the reader to verify that the expanded decision tree now has a value of $48.6 million, whereas the original decision tree has a value of $44 million [= 0.6 × 100 – 0.4 × 40]. By definition, the value of information is the difference between the new and old decision tree values: value of information = $48.6 – $44 = $4.6 million. We conclude that we should be willing to pay up to $4.6 million to purchase the 3D seismic interpretation.
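The preposterior calculation and tree rollback can be sketched in Python. Table 10.6 is not reproduced here, so the reliability entries below, other than P(A1|B1) = 0.70 given in the text, are hypothetical placeholders; with the actual table entries, the same script should reproduce the $48.6 million figure.

```python
# Value-of-information sketch for the drill/divest example.
# Reliabilities other than P(A1|B1)=0.70 are assumed for illustration.
PRIOR = {"closed": 0.6, "not_closed": 0.4}
NPV_SUCCESS, NPV_FAILURE, NPV_DIVEST = 100.0, -40.0, 10.0

# P(interpretation outcome | true state); each row sums to 1.
RELIABILITY = {
    "closed":     {"likely": 0.70, "inconclusive": 0.20, "unlikely": 0.10},
    "not_closed": {"likely": 0.10, "inconclusive": 0.20, "unlikely": 0.70},
}

def tree_value_with_information(prior, reliability):
    """Roll back the expanded tree: for each seismic outcome,
    choose the better of drilling (at posterior odds) or divesting."""
    value = 0.0
    for outcome in ["likely", "inconclusive", "unlikely"]:
        # Preposterior probability of this outcome (total probability).
        p_outcome = sum(prior[s] * reliability[s][outcome] for s in prior)
        if p_outcome == 0:
            continue
        # Posterior P(closed | outcome) by Bayes' theorem.
        p_closed = prior["closed"] * reliability["closed"][outcome] / p_outcome
        ev_drill = p_closed * NPV_SUCCESS + (1 - p_closed) * NPV_FAILURE
        value += p_outcome * max(ev_drill, NPV_DIVEST)
    return value

base = max(PRIOR["closed"] * NPV_SUCCESS + PRIOR["not_closed"] * NPV_FAILURE,
           NPV_DIVEST)                      # $44 million without information
voi = tree_value_with_information(PRIOR, RELIABILITY) - base
print(f"Value of information: ${voi:.1f} million")
```

With a perfect sensitivity table (diagonal entries of 1.0), the same function returns the value of perfect information, a useful upper bound when debating how much to spend on data acquisition.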

Monte Carlo Simulation

Monte Carlo simulation begins with a model, often built in a spreadsheet, having input distributions and output functions of the inputs. The following description is drawn largely from Murtha. [16] Monte Carlo simulation is an alternative to both single-point (deterministic) estimation and the scenario approach that presents worst-case, most-likely, and best-case scenarios. For an early historical review, see Halton. [17]

A Monte Carlo simulation begins with a model (i.e., one or more equations together with assumptions and logic, relating the parameters in the equations). For purposes of illustration, we select one form of a volumetric model for oil in place, N, in terms of area, A; net pay, h; porosity, φ; water saturation, Sw; and formation volume factor, Bo.

N = 7,758 A h φ (1 – Sw) / Bo....................(10.6)

Think of A, h, φ, Sw, and Bo as input parameters and N as the output. Once we specify values for each input, we can calculate an output value. Each parameter is viewed as a random variable; it satisfies some probability vs. cumulative–value relationship. Thus, we may assume that the area, A, can be described by a log-normal distribution with a mean of 2,000 acres and a standard deviation of 800 acres, having a practical range of approximately 500 to 5,000 acres. Fig. 10.16 identifies and shows the distributions for each of the input parameters.

A trial consists of randomly selecting one value for each input and calculating the output. Thus, we might select A = 3,127 acres, h = 48 ft, φ = 18%, Sw = 43%, and Bo = 1.42 res bbl/STB. This combination of values would represent a particular realization of the prospect, yielding 84.1 million bbl of oil. A simulation is a succession of hundreds or thousands of repeated trials, during which the output values are stored in memory. Afterward, the output values are diagnosed and usually grouped into a histogram or cumulative distribution function. Figs. 10.17 and 10.18 show the output and the sensitivity chart for this model.
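A minimal version of this simulation can be sketched with NumPy. The area distribution follows the text (log-normal, mean 2,000 acres, standard deviation 800 acres); the parameters for the other inputs are not given numerically in Fig. 10.16, so the triangular ranges shown are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N_TRIALS = 10_000

def sample_lognormal(mean, sd, size, rng):
    """Convert arithmetic mean/sd to the underlying normal parameters."""
    var = np.log(1 + (sd / mean) ** 2)
    mu = np.log(mean) - var / 2
    return rng.lognormal(mu, np.sqrt(var), size)

# Inputs: area follows the text; the other ranges are illustrative.
A   = sample_lognormal(2000, 800, N_TRIALS, rng)      # acres
h   = rng.triangular(30, 50, 80, N_TRIALS)            # net pay, ft
phi = rng.triangular(0.12, 0.18, 0.24, N_TRIALS)      # porosity, fraction
Sw  = rng.triangular(0.30, 0.43, 0.60, N_TRIALS)      # water saturation
Bo  = rng.triangular(1.2, 1.4, 1.6, N_TRIALS)         # res bbl/STB

# Eq. 10.6 evaluated once per trial (7,758 bbl per acre-ft).
N = 7758 * A * h * phi * (1 - Sw) / Bo
print(f"Mean OOIP: {N.mean()/1e6:.1f} million bbl, "
      f"P10-P90: {np.percentile(N, 10)/1e6:.1f}-{np.percentile(N, 90)/1e6:.1f}")
```

The single trial quoted in the text checks out against Eq. 10.6: 7,758 × 3,127 × 48 × 0.18 × (1 – 0.43) / 1.42 ≈ 84.1 million bbl.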

Selecting Input Distributions. Log-normal distributions are often used for many of the volumetric model inputs, although net-to-gross ratio and hydrocarbon saturation are seldom skewed right and are always sharply truncated. Triangles are also fairly common and are easy to adapt because they can be symmetric or skewed either left or right. Sometimes, the distributions are truncated to account for natural limits (porosity cutoffs, well spacing). When all the inputs are assumed to be log-normal, with no truncation, and independent of one another, the product can be obtained analytically.
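The analytic result for a product of independent, untruncated log-normal inputs follows because the logarithm of a product is the sum of the logarithms: if each factor is log-normal with log-space parameters (μi, σi), the product is log-normal with μ = Σμi and σ² = Σσi². A quick numerical check, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
params = [(7.5, 0.4), (3.8, 0.3), (-1.7, 0.2)]   # (mu_i, sigma_i), illustrative

# Simulate the product of independent log-normal inputs.
product = np.ones(200_000)
for mu, sigma in params:
    product *= rng.lognormal(mu, sigma, 200_000)

# Analytic parameters of the product, and its mean exp(mu + sigma^2/2).
mu_tot = sum(mu for mu, _ in params)
var_tot = sum(s**2 for _, s in params)
analytic_mean = np.exp(mu_tot + var_tot / 2)
print(f"simulated {product.mean():.3e} vs analytic {analytic_mean:.3e}")
```

Note that the volumetric model above is not a pure product of log-normals, since it contains the factor (1 – Sw); the analytic shortcut applies only when every factor is itself log-normal.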

Shape of Outputs. In this example, regardless of the distribution types of the inputs, the output is approximately log-normal. That is, the reserves distribution is always skewed right and "looks" log-normal. In fact, a product of any kind of distributions, even with skewed-left factors, has the approximate shape of a log-normal distribution. For our first example, Fig. 10.17 displays the best-fitting log-normal curve overlaying the output histogram.

Applications of Monte Carlo Simulation. Although decision trees are widely used, they tend to be restrictive in the types of problems they solve. Monte Carlo simulation, however, has a broad range of applicability. For that reason, we devote an entire section to its applications rather than listing a few here. Suffice it to say that Monte Carlo simulation is used to answer questions like: "What is the chance of losing money?" "What is the probability of exceeding the budget?" and "How likely is it that we will complete the well before the icebergs are due to arrive?"

Sensitivity Analysis. Ask anyone what sensitivity analysis means and they are likely to tell you it has to do with changing a variable and observing what happens to the results. That is the gist of it, but the concept is much broader. We begin with traditional methods, compare their Monte Carlo simulation and decision tree counterparts, and then discuss some extensions and refinements.

Tornado Diagrams. The traditional tornado chart or diagram consists of bars of various lengths indicating the range of values of some key output (cost, reserves, NPV) associated with the full range of values of one input (some line-item cost, some geological attribute such as porosity, or capital investment, for example). The calculations are done by holding all but one variable fixed at some base value while the single input is varied.

Although this interpretation is often useful and very widely used in presentations, it is flawed in several ways.

  • Holding all variables but one fixed presumes the variables are fully independent. Many models have pairs of inputs that depend on each other or on some third variable; when one parameter increases, the other one tends to increase (positive correlation) or decrease (negative correlation).
  • The base case at which all but one variable is held constant might be a mean or a mode or a median. There is no firm rule.
  • There may not be a minimum or maximum value for a given input. Any input described by a normal or log-normal distribution has an infinite range. Even if we acknowledge some practical limit for purposes of the exercise, there is no guideline for what those limits should be (e.g., a P1 or P5 at the low end).
  • Focusing on the extreme cases sheds no light on how likely those extremes are. There is no convenient way (and if there were, it would almost certainly be incorrect) to read a 90% confidence interval from the bars that make up the tornado chart.


All this is not to say that tornado charts are worthless. On the contrary, they are "quick and dirty" methods and can help us understand which inputs are most important. It is just that we do not want to rely upon them when better methods are available.
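A deterministic tornado sweep for the volumetric model of Eq. 10.6 can be sketched as follows; the base values and low/high ranges are hypothetical.

```python
# One-at-a-time tornado sweep for Eq. 10.6 (hypothetical base and ranges).
BASE = {"A": 2000.0, "h": 50.0, "phi": 0.18, "Sw": 0.43, "Bo": 1.4}
RANGE = {"A": (800, 4500), "h": (30, 80), "phi": (0.12, 0.24),
         "Sw": (0.30, 0.60), "Bo": (1.2, 1.6)}

def ooip(v):
    """Volumetric oil in place, bbl (Eq. 10.6)."""
    return 7758 * v["A"] * v["h"] * v["phi"] * (1 - v["Sw"]) / v["Bo"]

bars = []
for name, (lo, hi) in RANGE.items():
    low, high = dict(BASE), dict(BASE)
    low[name], high[name] = lo, hi      # vary one input, hold the rest fixed
    bars.append((name, ooip(low), ooip(high)))

# Sort by swing so the widest bar sits on top, as in a tornado chart.
bars.sort(key=lambda b: abs(b[2] - b[1]), reverse=True)
for name, left, right in bars:
    print(f"{name:4s} {left/1e6:8.1f} -- {right/1e6:8.1f} million bbl")
```

With these ranges, area dominates the chart and formation volume factor contributes the smallest swing, which is typical of volumetric prospects.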

Spider Diagrams. Like tornado charts, a spider diagram is a traditional, but somewhat limited, tool. Again, one holds fixed all but one variable and examines how the output changes (usually measured as a percent change) as we vary that one input (usually by a few specific percentages). Typically, we might vary each input by 5, 10, and 20% and see how much the output changes. Often the percent change is not linear, causing the resulting graph to have broken line segments, accounting for the name: spider diagram.

As with classical tornado charts, the spider diagram makes several assumptions, most of which are unrealistic.

  • The variables are completely independent (no correlation or conditionals between them).
  • The same range (plus or minus 20%) is suitable for each of the inputs, whereas some inputs might have a natural variable range of only a few percent, while others could vary by 50 or 100% from the base case.
  • The base case is again arbitrary, possibly being the mean, median, or mode of each input.

Again, while these restrictions make the spider diagram less than perfect, it is often a good first pass at sensitivity and is widely used in management circles. See Figs. 10.19 and 10.20, respectively, for examples of tornado and spider diagrams.

Monte Carlo Simulation Sensitivity: Regression and Correlation Methods

At the completion of a Monte Carlo simulation, the user has available two robust methods of sensitivity analysis. Consider the database consisting of one output, Y, and the corresponding inputs, X1, X2, ..., Xn. We can perform multiple linear regressions of Y on the Xi and obtain the βi values, numbers between –1 and +1, which indicate the fraction of standard deviation change in the output when the ith input changes by one standard deviation. That is, suppose βi = 0.4, Y has a standard deviation of 50, and Xi has a standard deviation of 6. Then, changing Xi by 6 units would change Y by 20 units.

An alternative form of sensitivity from the Monte Carlo simulation is obtained by calculating the rank-order correlation coefficient between Y and Xi. These values also lie between –1 and +1 and indicate the strength of the relationship between the two variables (Xi and Y). Both regression and correlation are useful. While it may seem more natural to think in terms of the regression method, the xy scatter plot of the Y vs. Xi can be a powerful tool in presentations. It illustrates how a small sample from a key input (i.e., one with a high correlation coefficient) might restrict the output to a relatively narrow range, thus aiding in the interpretation of the sensitivity plot. Both of these methods can be presented as a "tornado" chart, with horizontal bars having orientation (right means positive, left means negative) and magnitude (between –1 and 1), thereby ranking the inputs according to strength or importance. Fig. 10.19 shows the chart for rank correlation; the corresponding chart for regression would be quite similar.
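A rank-correlation sensitivity calculation on stored trial data might look like the following sketch in pure NumPy (`scipy.stats.spearmanr` would do the same job); the toy model and distributions are hypothetical.

```python
import numpy as np

def rank_correlation(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    (No tie handling; fine for continuous simulated data.)"""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Rebuild a toy simulation database: inputs X1..X3 and output Y.
rng = np.random.default_rng(1)
n = 5000
X1 = rng.lognormal(7.6, 0.5, n)          # dominant input (widest spread)
X2 = rng.triangular(30, 50, 80, n)
X3 = rng.triangular(1.2, 1.4, 1.6, n)
Y = X1 * X2 / X3                          # output of the model

# Rank inputs by |rho| to build the sensitivity "tornado".
rhos = {name: rank_correlation(x, Y)
        for name, x in [("X1", X1), ("X2", X2), ("X3", X3)]}
for name, rho in sorted(rhos.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {rho:+.2f}")
```

Because X3 appears in the denominator, its coefficient comes out negative, illustrating how the sign of each bar carries the direction of influence.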

Decision Tree Sensitivity

Decision trees also have inputs and outputs. The inputs consist of the values (typically either NPV or cost) or the probabilities of the various outcomes emanating from the chance nodes. Sensitivity analysis amounts to selecting one of these inputs and letting it vary throughout a range, recalculating the decision tree with each new value, then plotting the output (the root decision value) as a function of the chosen input range, which yields a piecewise linear graph for each of the root decision options.

For instance, consider the example introduced earlier concerning whether to drill ahead or set pipe as we approach a possibly overpressured zone (see Fig. 10.13). By varying the chance that the zone is overpressured from 0.1 to 0.5 (around the base-case value of 0.2), we calculate the cost of the two alternatives (Fig. 10.21) and see that only for very small chances of overpressure would it be correct to drill ahead; otherwise, setting pipe is the safe, low-cost choice. Similarly, we could perturb the cost of encountering overpressure from the base-case value of 100 to a low value of 50 and a high value of 200 and obtain a similar graph.

Finally, one can vary two inputs simultaneously. That is, we could consider all combinations of the P(kick) and cost of kick. This is called a two-way sensitivity analysis in contrast to the one-way analysis already described. It is helpful to have software to handle all these cases, which are otherwise tedious. The graph for the two-way sensitivity analysis is difficult to interpret, being a broken plane in three dimensions. Alternatively, we can generate a rectangle of combinations and color-code (or otherwise distinguish) them to indicate which ones lead to the choice of setting pipe.
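A one-way sensitivity sweep for this tree can be sketched as follows. The costs in Fig. 10.13 are not reproduced here, so apart from the kick cost (base 100) and P(kick) (base 0.2) given in the text, the numbers are hypothetical.

```python
# One-way sensitivity on P(overpressure) for a drill-on vs. set-pipe tree.
# Costs (arbitrary units) other than the kick cost are hypothetical.
COST_SET_PIPE = 25.0        # deterministic cost of setting pipe
COST_DRILL_OK = 10.0        # drilling ahead, no overpressure encountered
COST_KICK = 100.0           # drilling ahead into overpressure (base case)

def expected_cost_drill_on(p_kick, cost_kick=COST_KICK):
    """Expected cost of the drill-ahead branch at a given P(kick)."""
    return p_kick * cost_kick + (1 - p_kick) * COST_DRILL_OK

for p in [0.1, 0.2, 0.3, 0.4, 0.5]:
    drill = expected_cost_drill_on(p)
    best = "drill on" if drill < COST_SET_PIPE else "set pipe"
    print(f"P(kick)={p:.1f}: drill-on cost {drill:5.1f} -> {best}")

# Breakeven probability: where the two branches cost the same.
breakeven = (COST_SET_PIPE - COST_DRILL_OK) / (COST_KICK - COST_DRILL_OK)
print(f"breakeven P(kick) = {breakeven:.3f}")
```

A two-way analysis would simply wrap a second loop over `cost_kick` and record which branch wins at each (probability, cost) pair.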

In the end, however, sensitivity analysis for decision trees resembles more the deterministic methods of the traditional tornado plots or spider diagrams than it does the more robust sensitivity of Monte Carlo simulation. In fact, software packages often offer these charts to present the results. In spite of the limitations of these methods, it is imperative that anyone using decision trees do a careful job of sensitivity analysis and include those results in any presentation.

Data Analysis

Regardless of the principal tool used in risk analysis—Monte Carlo simulation or decision trees—empirical data may play an important role. Estimating the probabilities and values for a decision tree is often done by examining historical data. Similarly, the input distributions selected for a Monte Carlo model are easier to justify when analogous data is available to support the choices of distribution type and value of defining parameters, such as mean and standard deviation.

There are two procedures we can follow when given a set of data, depending on how willing we are to make assumptions about them. In the first, we make no assumptions about any underlying distribution: we describe the data in terms of mean, median, mode, range, quartiles or deciles, and the like, and we draw a stem-and-leaf diagram, a histogram, and/or a cumulative distribution, looking for bimodality, outliers, and other anomalies. In the second, we assume the data is a sample from some particular population: we calculate standard deviation and skewness, then find one or a few distribution types and defining parameters that would be likely candidates for this population. Stated tersely, we find the best-fitting probability distribution for the data.

The first method is straightforward. Using a spreadsheet, we can invoke the functions AVERAGE, MEDIAN, MODE, MIN, MAX, COUNT and so on, referencing the column or row of data, or we can use the menu sequence, Tools/Data Analysis/Descriptive Statistics, then Tools/Data Analysis/Histogram.

The second method requires software that uses a "goodness-of-fit" metric to compare a fitted density function to the data's histogram. The most popular one is the chi-square test, defined as χ² = Σ(di²/yi), where di is the difference between the ith data point and yi, the function's prediction for that point. The distribution that minimizes this sum of normalized squared errors is deemed the best-fitting curve.
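A bare-bones version of this fitting loop, using SciPy's distribution objects and the χ² metric just described; the candidate list, bin count, and synthetic "field data" are all choices made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.lognormal(3.0, 0.5, 500)       # stand-in for field data

# Histogram the data, then score each candidate by chi-square.
counts, edges = np.histogram(data, bins=12)
candidates = {"lognorm": stats.lognorm, "norm": stats.norm,
              "gamma": stats.gamma}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(data)               # maximum-likelihood parameters
    # Expected count per histogram class under the fitted distribution.
    cdf = dist.cdf(edges, *params)
    expected = len(data) * np.diff(cdf)
    mask = expected > 0                   # avoid division by zero
    scores[name] = np.sum((counts[mask] - expected[mask]) ** 2
                          / expected[mask])

best = min(scores, key=scores.get)
print(best, {k: round(v, 1) for k, v in scores.items()})
```

Rerunning with `bins=6` or `bins=25` changes the scores, which is exactly the class-count dependence discussed below.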

While this process seems simple, some caution is advised. For example, bearing in mind that the density function is supposed to pass as close as possible to the data (in the sense of minimizing the value of χ²), it is obvious that the value of the chi-square best-fit statistic depends on the number of classes one chooses for the histogram. Nevertheless, the software generally yields a few good fits for your selection—distributions that would have very similar results in a model.

To avoid the dependence on the number of classes, you might choose one of the two other popular fitting metrics, namely the Anderson-Darling and the Kolmogorov-Smirnov statistics. Neither depends on the number of histogram classes, because both compare the fitted cumulative distribution with the empirical one rather than with a histogram.

This curve fitting—while resembling the common least squares, linear regression procedure of finding the best linear relationship Y = mX + b —differs in several respects.

  • Linear regression requires only three or four points to establish a sensible trend between Y and X; but density-function fitting requires a dozen or so points or more to establish a histogram with a few classes and a few points per class.
  • Linear regression is intuitive; anyone can draw a fairly good line through a scatter plot, but not many people are good at sketching log-normal or normal curves, and the best-fitting triangles are often surprises.
  • The subroutines to minimize the goodness-of-fit function χ2 are not as simple as the familiar formula for regression, often given as an exercise in Calculus II as soon as the student knows how to take simple partial derivatives of quadratic functions.


To repeat, one should use the curve-fitting software with care. A few other things to note:

  • Often the best-fitting curve is not one of the familiar distributions, yet there is almost always a familiar type that is nearly as good a fit.
  • The software may require that the user specify whether to fix the left bound of the distribution at some constant such as zero to obtain a good fit of a log-normal distribution, but this rules out normal curves and restricts triangles considerably.
  • The software requires a minimum number of points to work properly. Check the user manual.
  • Using the cumulative histogram and cumulative distribution for fitting always looks like a better fit than the density function and the histogram.

Using Risk Analysis to Rank Investments. Decision trees explicitly compare two or more alternatives and choose the one having the best expected value. In Monte Carlo simulation, the "answer" is simply one or more output distributions—not a single number. Suppose we are modeling reserves, for example. The output is a distribution having a mean and a standard deviation and skewness and a set of percentiles. When we include the dry-hole case, the distribution will not take a simple shape of a log-normal or normal distribution, but would have a spike at zero and one or more lumps depending on the possibility of two or more layers or components (see Fig. 10.22). Similarly, if we are modeling NPV, we will often get a complicated distribution. Now suppose we had a competing prospect and estimated its reserves and its NPV. The question becomes, "Is there some way to compare these distributions to rank the two prospects?" There are numerous methods to rank and compare. We mention a few of them.

Let μA and μB be the means and σA and σB the standard deviations of distribution of reserves for two prospects A and B, and let pA and pB be their respective chances of success. Here are some possible ranks.

  • According to the larger of μA and μB.
  • According to the larger of pAμA and pBμB.
  • According to the larger of μA/σA and μB/σB.


A 2D ranking can be realized by cross plotting (μA, σA) and (μB, σB). This works best with several prospects, where we look for dominance in the diagonal direction in which μ gets bigger and σ gets smaller. This is essentially the method of portfolio optimization. What all these metrics, except the first one, have in common is that we scale back the mean by some factor of risk.

Now, let μA and μB be the means and σA and σB the standard deviations of the distributions of NPV for two investments A and B, and let IA and IB be their respective mean investments (one could treat investment as a distribution instead). Next, we list some possible ranks.

  • According to the larger of μA and μB.
  • According to the larger of μA/IA and μB/IB.
  • According to the larger of μA/σA and μB/σB .
  • By cross plotting (μA, σA) and (μB, σB) and looking for dominance in the diagonal direction where μ gets bigger and σ gets smaller.
  • A similar cross plot but using the semistandard deviation obtained by averaging those squared deviations from the mean for which the value is less than the mean. This is the traditional portfolio optimization metric leading to the efficient frontier.
  • According to the larger of μA/(μA – P5A) and μB/(μB – P5B). [This metric is somewhat inappropriately named risk-adjusted return on capital (RAROC), and P5 is called value at risk (VAR).]


Whatever measure you use that reduces a complex set of information—in this context, one or more probability distributions—to a single value or to a pair of values to be plotted in a scatter plot, you should know that it will be imperfect. One reason there are so many different metrics is that people constantly find fault with them. The usual conflict is to have two investments, A and B, where A is ranked higher by the metric chosen, only to find that everyone agrees that B is more attractive. One specific example the authors were involved with used RAROC. The investment involved a government that could default at any point in time, causing a loss of investment and termination of revenue. A probability of default was assigned for each time period. After the base model was built, the probability of default was reduced (creating a more attractive investment), and yet the RAROC decreased.

Optimization

Classical mathematical programming, which includes linear programming, features a standard optimization problem, which we shall describe in terms of NPV.

Suppose there is a fixed exploration budget, which you must decide how to allocate among four types of drilling prospects. For each, you know the chance of success, the range of drilling and completion cost, and the corresponding ranges of discovery size and ultimate value. You thereby assign each a risk, ranging from low to high. Your objective is to maximize NPV, but you want to avoid "too much" risk.

The deterministic version of this problem seeks to maximize NPV constrained by the limit on capital and uses average values for everything. The optimal solution is described by a budget allocation and the resulting average NPV. The user would have to decide what "too risky" means; for example, drilling all high-risk wells might be too risky.

The probabilistic version assumes distributions for all well costs, as well as the NPV for the successes, and furthermore assigns a P(S) for each prospect. One additional type of constraint can be included: we can describe "riskiness" of NPV by the coefficient of variation (CV) of the NPV distribution or some high-end percentile, say P90. Here is one way to state the optimization problem.

Optimizing an Exploration Program. DDD Enterprises has investment prospects in four world locations, called ventures, and must decide how to allocate its exploration budget among them. Among its objectives are to maximize NPV and to avoid large cost overruns. Technical experts have modeled estimates for drilling and completion costs, as well as NPV for discoveries. These distributions, along with the chances of success for each group, are listed in Table 10.8.

In three of the countries, prior commitments require that a minimum number of wells be drilled. Each country has an upper limit on the number of wells established by either available prospects or drilling rigs. The board of directors has urged that estimated average exposure be limited to $170 million. Moreover, they require a 90% confidence level for the actual exposure to be less than $200 million. Given these constraints, the Board wishes to maximize average NPV.

Exposure is defined to be the total of drilling cost (number of wells times average dry-hole cost) plus completion cost (number of successful wells times average completion cost). Average exposure is found by using the expected number of successes [P(S) times the number of wells drilled]. All prospects are assumed to be independent. Binomial distributions are used for the numbers of successes.

Running the optimization consists of batch processing 50, 100, or more Monte Carlo simulations to find the one that maximizes mean NPV while honoring the constraints on exposure and the number of wells per country. Any simulation resulting in a distribution of exposure with P90 exceeding $200 million is summarily rejected.
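The batch-processing idea can be sketched as follows. Table 10.8 is not reproduced here, so the per-venture parameters and well-count bounds below are hypothetical; a production version would search the allocation space rather than enumerate it exhaustively.

```python
import itertools
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical venture data: P(S), mean dry-hole cost, mean completion
# cost, mean NPV per success (all $MM), and min/max well counts.
VENTURES = [
    {"ps": 0.30, "dry": 8.0,  "compl": 6.0, "npv": 60.0, "lo": 1, "hi": 5},
    {"ps": 0.45, "dry": 5.0,  "compl": 4.0, "npv": 35.0, "lo": 1, "hi": 6},
    {"ps": 0.20, "dry": 12.0, "compl": 9.0, "npv": 90.0, "lo": 1, "hi": 4},
    {"ps": 0.60, "dry": 3.0,  "compl": 2.5, "npv": 20.0, "lo": 0, "hi": 8},
]
N_TRIALS = 2000

def simulate(alloc):
    """Return (mean NPV, mean exposure, P90 exposure) for a well allocation."""
    npv = np.zeros(N_TRIALS)
    exposure = np.zeros(N_TRIALS)
    for wells, v in zip(alloc, VENTURES):
        hits = rng.binomial(wells, v["ps"], N_TRIALS)   # successes per trial
        exposure += wells * v["dry"] + hits * v["compl"]
        npv += hits * v["npv"] - wells * v["dry"] - hits * v["compl"]
    return npv.mean(), exposure.mean(), np.percentile(exposure, 90)

best, best_npv = None, -np.inf
for alloc in itertools.product(*[range(v["lo"], v["hi"] + 1) for v in VENTURES]):
    mean_npv, mean_exp, p90_exp = simulate(alloc)
    # Reject any allocation violating the exposure constraints.
    if mean_exp <= 170 and p90_exp <= 200 and mean_npv > best_npv:
        best, best_npv = alloc, mean_npv

print("best allocation:", best, "mean NPV:", round(best_npv, 1))
```

With these placeholder numbers, every venture is profitable on average, so the exposure constraints, not the NPV objective, are what limit the program size.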

Real Options

One of the recent methods of risk analysis is real options. Borrowing the idea from the investment community, proponents argue that many of our assets possess characteristics similar to a financial option. First, we review simple puts and calls and then outline their counterparts in both the upstream and downstream components of our business.

  • A financial option always references a specific underlying asset, which we shall envision as a share of stock.
  • The investor pays for the option, an amount called the option price or premium.
  • A call option (or simply a call) is the right to buy one share of a stock at a given price (the strike price) on or before a given date (the exercise date).


A put option (or simply a put) is the right to sell one share of a stock at the strike price on or before the exercise date. The value of a call on the exercise date is the difference between the market price and the strike price if that difference is positive, and zero otherwise; that is, we do not exercise the option unless it is to our advantage. (The investor's net profit is this value less the premium paid.)

A so-called European option requires that the purchase be made on the exercise date; a so-called American option allows the purchase on or before the exercise date. European options are simpler to model and think about. For instance, the decision to exercise a European option is straightforward: do it if you are "in the money," (i.e., if the value is positive on the exercise date).
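A Monte Carlo valuation of a European call, under the standard textbook assumption of log-normally distributed terminal prices (geometric Brownian motion under the risk-neutral measure), can be sketched as follows; the parameter values are illustrative.

```python
import numpy as np

def euro_call_mc(s0, strike, r, sigma, t, n=200_000, seed=0):
    """Monte Carlo price of a European call: discounted mean payoff."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    # Risk-neutral terminal price under geometric Brownian motion.
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - strike, 0.0)   # exercise only "in the money"
    return np.exp(-r * t) * payoff.mean()

price = euro_call_mc(s0=100, strike=100, r=0.05, sigma=0.30, t=1.0)
print(f"call premium ~ {price:.2f}")
```

The `np.maximum(st - strike, 0.0)` line is the decision rule described above: the option is worth the market-minus-strike difference when positive, and zero otherwise.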

A real option is similar to a financial option but is far more general. Corporations increasingly recognize the implicit value of certain aspects of their business. Specific types of real options that might be available in any development are listed next.

  • Changing the scope of the project.
  • Changing the time horizon: moving the start date up or back; extending or shrinking the duration, even abandoning the project.
  • Changing the mode of operation.


While there are great similarities between financial and real options, their differences are noteworthy. For instance, the underlying asset of a financial option is a share of stock or some other asset available in a market. In theory, the option holder has no influence on the price of that asset (although in practice, things get more complicated; the option holder can buy or sell large quantities of the asset). A real option, however, is usually some kind of project or investment, and the holder of the option may have considerable influence over its value.

Software for Real Options. Special software for real options does exist. At the time of this writing, however, there is no inexpensive package analogous to the Monte Carlo simulation and decision-tree tools, which can be purchased for less than U.S. $1,000 and run as Excel add-ins or standalone programs.

Nevertheless, an experienced Monte Carlo simulation expert can model real options in Excel. Essentially, one must be careful to acknowledge the different possible times at which the option can be exercised, quantify the value, provide branches for the different decisions (whether to exercise or not), and alter the subsequent cashflow properly.
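As a hedged illustration of that kind of spreadsheet logic, consider an abandonment option in a Monte Carlo cash-flow model: on each iteration, the project is abandoned for a salvage value as soon as a crude continuation estimate falls below it. All numbers, and the exercise rule itself, are hypothetical.

```python
import numpy as np

N_TRIALS, YEARS = 5000, 8
SALVAGE, DISCOUNT = 15.0, 0.10          # $MM salvage; discount rate

def project_npv(with_option, salvage=SALVAGE):
    """Mean NPV of a risky cash-flow stream, with or without the
    option to abandon for salvage value."""
    rng = np.random.default_rng(5)      # same paths for both runs
    npvs = np.empty(N_TRIALS)
    for i in range(N_TRIALS):
        npv, cash = 0.0, 12.0           # starting annual cash flow, $MM
        for year in range(1, YEARS + 1):
            cash *= rng.lognormal(-0.1, 0.35)   # uncertain decline/prices
            # Crude exercise rule: abandon when this year's cash,
            # extended over the remaining life, is worth less than salvage.
            if with_option and cash * (YEARS - year) < salvage:
                npv += salvage / (1 + DISCOUNT) ** year
                break
            npv += cash / (1 + DISCOUNT) ** year
        npvs[i] = npv
    return npvs.mean()

base = project_npv(with_option=False)
flexible = project_npv(with_option=True)
print(f"option value ~ {flexible - base:.1f} $MM")
```

The difference between the two runs is an estimate of the real option's value; a careful model would use a better continuation-value estimate than the one-line rule shown here.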

Combination of Tools

Risk optimization combines Monte Carlo simulation with classical optimization (e.g., linear programming, quadratic programming). Another combination that has been used since the late 1990s involves Monte Carlo simulation and decision trees. In essence, any value on the decision tree may be replaced with a continuous probability distribution. Then, on each iteration, samples are chosen from these distributions, and the new decision tree is created and solved, yielding an expected value for the root node. After a few hundred iterations, this root value distribution can then be reviewed. A refinement to this method captures which choices are selected on each iteration. At the end of the simulation, the report can indicate the percentage of time that each decision branch was selected. A branch selected a large percentage of the time would be regarded as an optimal path. This idea is analogous to project-scheduling software run with Monte Carlo enhancements, in which we capture the percentage of time that each activity appears on the critical path. Needless to say, combining tools in this way makes it even more imperative that the user be cautious in designing, testing, and implementing any model to avoid creating unrealistic realizations.
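The Monte Carlo-plus-decision-tree combination described above can be sketched as follows: on each iteration, sample the uncertain tree inputs, solve the (here, one-node) tree, and tally which branch wins. The distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)
N_ITER = 10_000

wins = {"drill": 0, "divest": 0}
root_values = np.empty(N_ITER)
for i in range(N_ITER):
    # Replace point estimates with samples (hypothetical distributions).
    p_success = rng.triangular(0.4, 0.6, 0.8)
    npv_success = rng.lognormal(np.log(100), 0.4)   # $MM
    npv_failure = -rng.triangular(20, 40, 70)
    npv_divest = 10.0
    # Solve the tree for this iteration and record the winning branch.
    ev_drill = p_success * npv_success + (1 - p_success) * npv_failure
    choice = "drill" if ev_drill > npv_divest else "divest"
    wins[choice] += 1
    root_values[i] = max(ev_drill, npv_divest)

print({k: v / N_ITER for k, v in wins.items()},
      "mean root value:", round(root_values.mean(), 1))
```

The `wins` tally is the refinement mentioned above: a branch selected on a large fraction of iterations would be regarded as the robust, near-optimal path.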

Risk Analysis, Risk Mitigation, and Risk Management

Risk analysis involves the modeling and quantification of uncertainty. Risk mitigation happens after the analysis and focuses on those unacceptable ranges of possibility (of cost overruns, shortfalls of reserves or NPV, and so on). Risk management is sometimes used as an inclusive term that encompasses risk analysis and risk mitigation and other times is used interchangeably with risk mitigation. In either case, risk management concentrates on what you do after the risk analysis.

Once the drivers of uncertainty have been identified, the focus shifts to ways to reduce the uncertainty. If a reserves model proves to be most sensitive to the bulk volume of the prospect, the company may be more willing to acquire 3D seismic data. If the cash flow of a proposed gas-fired electric power plant proves to be highly sensitive to natural gas price, then one strategy would be to hedge gas prices. When drilling infill wells where there is a great deal of uncertainty about initial production, it may make sense to fund a program of 10 or 50 wells rather than a single well, so that, on average, the wells produce according to expectations. In essence, the average of a sample tends toward the population average.

In general, risk mitigation is protection from unfavorable situations, using a variety of instruments and tools, including: hedges; turnkey, price- or cost-lock contracts; guarantees; insurance; partnering and diversification; increased level of activity to help achieve the law of averages; and alternate technology or redundancy. One key to risk management when doing Monte Carlo simulation is the sensitivity chart, which tells us the inputs that really matter. Those are the ones that deserve our attention. While it may be an important variable to some specialist, any input that fails to make the top 10 or so on the sensitivity chart does not deserve additional resources, assuming we are looking for reduced uncertainty in the outputs. One of the real benefits of risk analysis is the prioritizing of variables to direct the company to those things that could make a difference. Murtha[14] shows a detailed comparison between Monte Carlo simulation and decision trees by solving a problem using both methods.

Typical Applications of Technologies


Managers and engineers alike are concerned about the bottom-line indexes, net present value (NPV) and return on investment (ROI), and use these measures to aid in their decision making, but they also worry about capital requirements and, in our business, reserves. NPV and ROI are the result of a complex interrelationship of capital investment, production, prices, operating expenses, schedule, fiscal terms, and domestic taxes. Of course, it is the uncertainty in these estimates that makes the "solution" of these problems, and making a decision based on the prediction, so interesting. It is also that uncertainty that makes these considerations ideal candidates for risk analysis methods.

This section features several specific applications of Monte Carlo simulation and, in so doing, allows us to comment on numerous issues that are faced by engineers when using these problem-solving techniques. The problems we model include cost estimates, resource and reserve estimations, production forecasts, and cash flows. The issues include topics that are discussed elsewhere in this chapter: choice of input distribution type (Sec. 10.3), handling rare events, discrete vs. continuous variables (Sec. 10.3), correlation among inputs, and sensitivity analysis (Sec. 10.4).

Cost and Time Estimates

Estimating capital, one of the main ingredients for any cash flow calculation, is largely in the domain of the engineering community. Petroleum engineers are responsible for drilling costs and are often involved with other engineers in estimating costs for pipelines, facilities, and other elements of the infrastructure for the development of an oil/gas field.

All practicing engineers have heard horror stories of cost and schedule overruns, and some have even been involved directly with projects that had large overruns. Why did these overruns occur, and what could have been done to encompass the actual cost in the project estimate? Overruns can result from inefficiencies, unscheduled problems and delays, changes in design or execution, or a host of other reasons. The upstream oil/gas industry is a risky business. One inherently risky operation that we routinely undertake is drilling and completing a well. Thus, it should come as no surprise that estimating the total cost and time of a drilling prospect is a common application of uncertainty analysis, principally Monte Carlo simulation.

Cost models fall into the general class of aggregation models—we add line-item costs to get a total cost. These line items are specified as ranges or probability distributions, and the total cost is then a sum of the line items.

Simple Authorization for Expenditure (AFE) Model. Table 10.9 shows a probabilistic AFE for drilling a single well. The line items are described by symmetric triangular distributions. There are two subsections of the model, each with a cost subtotal.

The first cost subtotal comprises the cost of goods and services (consumables and tangibles) that are not time-dependent; the second cost subtotal represents the rig cost (i.e., the costs attributable to accomplishing each phase). The two ultimate outputs are Cost Total (the sum of the two subtotals) and Rig Time.

The user enters estimates for the minimum, most likely, and maximum for each estimated line item. In the top portion, the user enters estimates for line-item costs. In the bottom portion, the user enters estimates for the activity times. The costs associated with these tasks are then calculated as the time to complete the task multiplied by the rig day rate.

Assumptions include:

  • Items in the activity portion include all aspects of drilling. Thus, the "9 5/8-in. section" would include any tripping, minor expected delays, running casing, and cementing, in addition to drilling. (See comments on level of detail.)
  • There is no correlation between any pair of items.
  • The rig day rate is either a constant (if the rig is under contract) or a distribution (no contract).
  • The estimate covers only "scheduled events" and does not take into account either change of scope or "trouble time."

Some of these assumptions make the model simpler to design but less realistic. These shortcomings are easy to overcome, as we address later in this section.
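The mechanics of such a model can be sketched in a few lines of Python. Everything below is invented for illustration (the line items, ranges, and rig day rate are not the Table 10.9 values); each line item is a symmetric triangular distribution, and cost and rig-time totals are accumulated on every iteration:

```python
import random
import statistics

# Illustrative line items (min, most likely, max) -- NOT the Table 10.9 values.
fixed_items = {                      # goods and services, $MM (not time-dependent)
    "casing": (1.0, 1.5, 2.0),
    "cementing": (0.3, 0.5, 0.7),
}
activities = {                       # activity durations, days
    "9 5/8-in. section": (10, 14, 18),
    "testing": (5, 8, 11),
}
day_rate = 0.15                      # rig day rate, $MM/day (rig under contract)

def one_iteration():
    """Sample every line item once; return (total cost, rig time)."""
    fixed = sum(random.triangular(lo, hi, ml) for lo, ml, hi in fixed_items.values())
    days = sum(random.triangular(lo, hi, ml) for lo, ml, hi in activities.values())
    return fixed + days * day_rate, days

random.seed(1)
results = [one_iteration() for _ in range(10_000)]
costs = [c for c, _ in results]
times = [t for _, t in results]
print(f"mean cost {statistics.mean(costs):.2f} $MM, "
      f"mean rig time {statistics.mean(times):.1f} days")
```

Commercial Monte Carlo add-ins perform the same sampling inside the spreadsheet; the sketch simply makes the aggregation logic explicit.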

Why Use Triangular Inputs? In our example of a simple AFE model, we chose symmetric triangular distributions for the inputs. Why? Our example came from a location where there were many offset and analogous wells on which to base our cost and time distributions. Many cost engineers are trained to provide a base cost, which is frequently viewed as a most likely value together with a downside and upside (sometimes stated as a plus and minus percentage of the base cost). The triangular distribution is therefore a natural starting point. In practice, many line-item ranges are right-skewed, acknowledging the belief that time and cost have more potential to exceed the base case than to fall short.

Another skewed-right distribution is the log-normal, and it is also popular for line items. One drawback of the log-normal for cost estimates, however, is that it is fully determined by specifying only two points, not three. Although some users take three points and convert to a log-normal, one should be careful with the process. Suppose, for instance, that we are given the three values 30, 60, and 120 for a low, most likely, and high estimate for some line item. We could use the two extreme values as a P2.5 and P97.5 and assume that this 95% range (confidence interval) between them is approximately four standard deviations. The logic is that for normal distributions, the range would be exactly 3.92 standard deviations. For log-normal distributions, there is no simple rule, though experimentation would lead to a reasonable estimate. Once the standard deviation is estimated, one other value determines a unique log-normal, and the user may typically decide that the mid-range value will serve for a mode, P50, or mean.
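The three-point-to-log-normal conversion just described can be checked numerically. In this sketch we treat 30 and 120 as the P2.5 and P97.5 (so that they span 3.92 standard deviations of the underlying normal) and take the mid value, 60, as the P50:

```python
import math
import random

low, mid, high = 30, 60, 120   # three-point estimate for the line item

# Treat low/high as P2.5/P97.5: that 95% range spans 2 * 1.96 = 3.92
# standard deviations of the underlying normal, ln(X).
sigma = (math.log(high) - math.log(low)) / 3.92
mu = math.log(mid)             # take the mid value as the P50 (median)

random.seed(1)
sample = sorted(random.lognormvariate(mu, sigma) for _ in range(50_000))
p2_5, p50, p97_5 = sample[1250], sample[25_000], sample[48_750]
print(f"sigma = {sigma:.3f}; sampled P2.5 ~ {p2_5:.1f}, "
      f"P50 ~ {p50:.1f}, P97.5 ~ {p97_5:.1f}")
```

The sampled percentiles recover the three input points; this particular triple happens to be internally consistent with a log-normal (the median 60 is the geometric mean of 30 and 120), which an arbitrary triple generally is not. That is one reason the conversion requires care.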

Resulting Time and Cost Estimates. Figs. 10.23 and 10.24 show the cumulative distribution of the AFE well time and the corresponding sensitivity chart. Because of the dominance of the time-dependent costs in this particular model, the cumulative distribution for total well cost, Fig. 10.25, and its corresponding sensitivity graph, Fig. 10.26, are quite similar to those of the well time. The drilling AFE calculated probabilistically now allows us to report that we are 90% confident that the well will cost between U.S. $10.1 and $11.5 million with an expected (mean) cost of $10.8 million. Similarly, we are 90% confident that the well will take between 58 and 70 days, with an expectation (mean) of 64 days to drill. The sensitivity charts indicate that the driving parameters in both the time and cost to drill this well are the 9 5/8-in. section and the testing and completion phases. If we wanted to reduce our uncertainty and have the biggest impact on the well time and cost, we would focus our attention (i.e., engineering skills) on those phases.

Handling Problems

As previously mentioned, one of the assumptions in this AFE model, as it stands, is that there are no unscheduled or problem events included. In reality, there rarely, if ever, is a well drilled that does not encounter one or more unscheduled events. The event may impact either the cost or the schedule or both. Because we want the model to be as realistic as possible, we must include the possibility of these unexpected events in our model.

Mechanics of Modeling Problems. A simple method of handling various problems encountered in drilling is to introduce a discrete variable that takes on the value zero when no problem occurs and the value one when there is a problem. We assign the probability of a one occurring, that is, that a problem will occur on any given iteration. Either a binomial distribution or a general discrete distribution may be used. Table 10.10 shows the modified drilling AFE worksheet with two rows inserted to accommodate this modification. In the first row (row 26), we have a cell for the probability of occurrence of the problem—in this instance, stuck pipe—and another cell for a binomial distribution that references the problem's probability. The probability of having stuck pipe in this example is 30%, obtained from our experience with similar wells in this area. What if we had no data? We would assign a probability based on our expert engineer's opinions.

The second row contains a new line item for the time needed to free the stuck pipe. In cell F27, we multiply the values sampled from the two distributions (logically equivalent to an "if" statement) to get either zero (in case the binomial distribution returns zero, signifying no stuck pipe in that iteration) or some value between 5 and 20 days. The corresponding cost in cell G27 is like any other formula in that column, except in this case it takes the value zero some of the time.
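The mechanics of rows 26 and 27 can be sketched as follows (the 30% probability matches the example; the 5-to-20-day time to free the pipe, its mode of 10 days, and the day rate are illustrative assumptions):

```python
import random
import statistics

p_stuck = 0.30        # probability of stuck pipe, from offset-well experience
day_rate = 0.15       # rig day rate, $MM/day (illustrative)

def stuck_pipe_days():
    # Binomial indicator times a time-to-free distribution: zero on ~70% of
    # iterations, otherwise 5 to 20 days (mode of 10 assumed here).  The
    # multiplication is logically equivalent to an "if" statement.
    indicator = 1 if random.random() < p_stuck else 0
    return indicator * random.triangular(5, 20, 10)

random.seed(1)
extra_days = [stuck_pipe_days() for _ in range(20_000)]
frac_stuck = sum(1 for d in extra_days if d > 0) / len(extra_days)
print(f"stuck pipe on {frac_stuck:.0%} of iterations; "
      f"mean extra cost {statistics.mean(extra_days) * day_rate:.2f} $MM")
```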

Effect of Including One or More Potential Problems. Fig. 10.27 shows the probability density function (PDF) for the resulting AFE well time estimate, when the potential problem of having stuck pipe is included. Notice that while the graph appears right-skewed, which is more in keeping with our experience (i.e., wells are more likely to have overruns), the graph is actually bimodal. In 70% of the cases, we do not have stuck pipe and everything goes as planned. In 30% of the simulations, we have stuck pipe, and there is an associated time and cost uncertainty to recovering from that problem. What has happened to the sensitivity diagram (Fig. 10.28)? Now the primary driver is whether we get stuck or not. Maybe it is time to look at the alternative drilling fluid system or the new technology that can get us through the whole section quicker, thus reducing significantly our chances of getting stuck.

Points Worth Pondering

Handling Correlation Among Line Items. In many cases, when one line-item cost is high, other line-item costs are likely to be high. Steel price changes, for instance, can cause simultaneous changes in several line items of a cost estimate. In such cases the user can assign a correlation coefficient to appropriate pairs of line-item distributions. The level of correlation is pretty subjective unless one has data. For example, if we track average unit prices for two items on a weekly or monthly basis, we can use the CORREL function in Excel to calculate the correlation coefficient. When data are not available, one method is to try two or three correlation coefficients (say 0.3, 0.6, 0.9) and examine the impact on the model outputs. For cost models, all (positive) correlation increases the standard deviation of the outputs; correlation does not affect the mean.
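When no correlation matrix is available, a shared-factor construction is one simple way to see the effect. In this sketch (all numbers invented), five line items each respond partly to a common shock, such as a steel price move, and partly to their own noise:

```python
import random
import statistics

def total_cost(shared_weight, n_items=5):
    # Each item is N(10, 2), built from a shared shock z plus item noise;
    # the pairwise correlation between items is shared_weight squared.
    z = random.gauss(0, 1)
    own = (1 - shared_weight ** 2) ** 0.5
    return sum(10 + 2 * (shared_weight * z + own * random.gauss(0, 1))
               for _ in range(n_items))

random.seed(1)
independent = [total_cost(0.0) for _ in range(20_000)]
correlated = [total_cost(0.8) for _ in range(20_000)]   # pairwise corr ~0.64
print(f"means:  {statistics.mean(independent):.1f} vs {statistics.mean(correlated):.1f}")
print(f"stdevs: {statistics.stdev(independent):.2f} vs {statistics.stdev(correlated):.2f}")
```

The means agree while the standard deviation of the correlated total is roughly double, illustrating the statement above that positive correlation widens, but does not shift, the output of a cost model.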

Central Limit Theorem Effects. Cost models follow the pattern of any aggregation model—the outputs tend to have relatively narrow ranges compared to the inputs. As a rule of thumb, summing N similar line items will yield a total with a coefficient of variation that shrinks by a factor of √N . The central limit theorem (see Sec. 10.3) says this reduction is exactly true when the distributions are identical, normal, and uncorrelated. In practice, the rule is surprisingly accurate, provided that one or two very large items do not dominate the sum. Also, the rule tends to lose accuracy when several items are highly positively correlated, because the resulting increase of extreme input values tends to spread out the results. The results of an aggregation model tend to be skewed rather than normal when the model includes events simulated using binomial or discrete distributions, such as those used for problem events.
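The √N rule of thumb is easy to verify by simulation. Here 25 identical, independent triangular line items (an arbitrary 8/10/16 range) are summed:

```python
import random
import statistics

def cv(samples):
    """Coefficient of variation: standard deviation over mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

random.seed(1)
one_item = [random.triangular(8, 16, 10) for _ in range(20_000)]
total_25 = [sum(random.triangular(8, 16, 10) for _ in range(25))
            for _ in range(20_000)]
print(f"item CV {cv(one_item):.3f}, 25-item total CV {cv(total_25):.3f}, "
      f"ratio {cv(one_item) / cv(total_25):.1f} (rule of thumb: sqrt(25) = 5)")
```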

Level of Detail. Historically, deterministic drilling-AFE models might have hundreds of line items—in part, for accounting purposes. Monte Carlo AFE models, however, tend to have a few dozen items. Construction cost estimating models can be even more detailed. One operator had a work breakdown structure (WBS) with 1,300 lines. While it is possible to transform such a detailed model into a Monte Carlo model, the drivers (the most sensitive variables) tend to be 20 or fewer. Therefore some users have two models—one very detailed and the other a more coarse, consolidated version. Keep in mind that one reason for doing risk analysis is to identify key inputs and then try to manage them. Many times the risk analysis model will be designed to optimize use of historical data while allowing the user to track a meaningful level of detail as the project progresses.

Software continues to improve, but very large, highly detailed models can be difficult to manage. For instance, there is usually some limit on the size of a correlation matrix, yet the models with hundreds of line items will necessitate incorporating correlation (otherwise, the central limit theorem effects will reduce the resulting distribution's standard deviation to an unrealistically low level of uncertainty). The most popular Monte Carlo application packages are add-ins to Excel, which has a limit on matrix size (256 columns as of spring 2002).

Summary. Capital expenditure and budgeting models, such as cost and time models, are good examples of aggregation models. In these models, we must address what type of distributions to use to describe the input parameters, what correlation exists among the input parameters, what level of complexity or detail is appropriate to the model, and what problem events to incorporate. The results we obtain from these models allow us to plan and budget based on a range of outcomes; the sensitivity charts focus our attention on the drivers to apply our risk management and risk-mitigation skills. A simple drilling AFE model (with each activity finishing before the next one begins) was used to illustrate these risk analysis concepts. More complex time and cost models, such as those with concurrent tasks, can also be solved with more complicated spreadsheet models or other existing software.

Resources and Reserves Models

Estimating resources and reserves crosses disciplines, involving both geoscientists and petroleum engineers. While the geoscientist may well have primary responsibility, the engineer must carry the resource and reserve models forward for planning and economics. Volumetric estimates of reserves are among the most common examples of Monte Carlo simulation. They are calculated for known producing wells, reservoirs, and fields. They are calculated for exploration wells, on mapped prospects and plays, and on unmapped prospects. The resource and reserve estimates are important in their own right, but in addition, these estimates are inputs and drivers for the capital-expenditure, production, and, ultimately, cash-flow models.

Volumetric Formulas. Consider the following typical volumetric formula to calculate the gas in place, G, in standard cubic feet.

G = 43,560 A h φ (1 − Sw) E/Bg ....................(10.7)

where

A = area, acres,
h = net pay, ft,
φ = porosity,
Sw = water saturation,
Bg = gas formation volume factor, res ft³/scf,
and
E = recovery efficiency.

In this formula, there is one component that identifies the prospect, A, while the other factors essentially modify this component. The variable h, for example, should represent the average net pay over the area, A. Similarly, φ represents the average porosity for the specified area, and Sw should represent average water saturation. The central limit theorem guarantees that distributions of average properties—net pay, porosity, and saturation—will tend to be normal. Another consequence of the theorem is that these distributions of averages are relatively narrow (i.e., they are less dispersed than the full distributions of net pays or porosities or saturations from the wells, which might have been log-normal or some other shape). The correct distributions for Monte Carlo analysis, however, are the narrower, normal-shaped ones.
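A minimal sketch of Eq. 10.7 as a Monte Carlo product model follows. All ranges are invented for illustration; note that the factors representing averages over the area are given the narrow, normal distributions argued for above, while the area itself is skewed right:

```python
import math
import random

random.seed(1)
recoverable = []
for _ in range(20_000):
    A = random.lognormvariate(math.log(1000), 0.4)  # area, acres (median 1,000)
    h = random.gauss(50, 5)                         # average net pay, ft
    phi = random.gauss(0.20, 0.02)                  # average porosity
    Sw = random.gauss(0.30, 0.03)                   # average water saturation
    E = random.gauss(0.70, 0.05)                    # recovery efficiency
    Bg = 0.005                                      # res ft3/scf, held constant here
    recoverable.append(43_560 * A * h * phi * (1 - Sw) * E / Bg)

recoverable.sort()
bcf = [g / 1e9 for g in recoverable]
print(f"recoverable gas: P10 {bcf[2_000]:.0f}, "
      f"P50 {bcf[10_000]:.0f}, P90 {bcf[18_000]:.0f} Bcf")
```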

Input Parameter Estimation. While we often do not have ample information to estimate the average porosity or average saturation, we are able to imagine what kind of range of porosities might exist from the best to the worst portions of the structure. We do have ample information from many mature fields where material balance could provide estimates. We also have extensive databases with plenty of information, from which some ranges of average values could be calculated and compared to the broader ranges of well data.

Always remember that, as with all else in Monte Carlo simulation, one must be prepared to justify all realizations (i.e., combinations of parameters). Just as we must guard against unlikely combinations of input parameters by incorporating correlations in some models, we should ask ourselves if a given area or volume could conceivably have such an extreme value for average porosity or average saturation. If so, then there must be even more extreme values at certain points within the structure to produce those averages (unless the structure is uniformly endowed with that property).

Perhaps the contrast is even easier to see with net pays. Imagine a play where each drainage area tends to be of relatively uniform thickness, which might be the case for a faulted system. Thus, the average h for a structure is essentially the same as seen by any well within the structure. Then the two distributions would be similar. By contrast, imagine a play where each structure has sharp relief, with wells in the interior having several times the net sand as wells near the pinchouts. Although the various structures could have a fairly wide distribution of average thicknesses, the full distribution of h for all wells could easily be several times as broad.

The traditional manner of describing area and treating it as a log-normal distribution is based on prospects in a play. If we were to select at random some structure in a play, then the appropriate distribution would likely be a log-normal. Sometimes, however, not even the area parameter should be modeled by a log-normal distribution. The distribution for A could easily be log-normal if the drainage areas were natural. In a faulted system, however, where the drainage areas were defined by faults, the distribution need not be log-normal. Suppose a particular prospect is identified from 3D seismic. We have seen situations where the base case value of area or volume is regarded as a mode (most likely). When the interpreters are asked to reprocess and/or reinterpret the data and provide relatively extreme upside (say P95) and downside (say P5) areas or volumes, the results are often skewed left—there is more departure from the mode toward the downside than the upside. Because the conventional log-normal distribution is only skewed right, we must select another distribution type, such as the triangular, beta, or gamma distribution.

What if this is correct: that we should be using narrower and more symmetrical distributions for several of the factors in the volumetric formula? Does it matter in the final estimate for reserves or hydrocarbon in place? How much difference could we expect? The right way to judge whether the type of distribution matters for an input variable is to compare what happens to the output of the simulation when one type is substituted for another.

Variations of the Volumetric Formula. Among the numerous variations of the volumetric formulas, there is usually only one factor that serves the role of area in the argument. For instance, another common formula estimates original oil in place (OOIP) by

OOIP = 7,758 Vb (NTG) φ So/Bo ....................(10.8)

where

Vb = bulk rock volume, acre-ft,
NTG = net to gross ratio,
φ = porosity,
So = oil saturation,
and
Bo = oil formation volume factor.
Here, Vb would be the dominant factor, which could be skewed right and modeled by a log-normal distribution, while the factors NTG, φ, So, and Bo would tend to be normally distributed because they represent average properties over the bulk volume.

Recovery Factors. Recovery factors, which convert hydrocarbon in place to reserves or recoverable hydrocarbon, are also average values over the hydrocarbon pore volume. The recovery efficiency may vary over the structure, but when we multiply the OOIP by a number to get recoverable oil, the assumption is that this value is an average over the OOIP volume. As such, the recovery factor distribution often should be normally distributed. Additional complications arise, however, because of uncertainty about the range of driving mechanisms. Will there be a waterdrive? Will gas injection or water injection be effective? These aspects of uncertainty can be modeled with discrete variables, much like the probability of stuck pipe in the drilling AFE example.

Output From the Simulation, OOIP, or Gas Initially in Place (GIIP). The Monte Carlo simulation yields a skewed right output (loosely speaking, "products are log-normal"), such as shown in Fig. 10.29, regardless of the shapes of the inputs. The result follows from (1) the definition of log-normal: a distribution the logarithm of which is normal, (2) the central limit theorem (sums are normal), and (3) the log of a product is the sum of the logs.

One notable example from Caldwell and Heather[18] uses five triangular distributions, two of which are sharply skewed left, one symmetric, and two slightly skewed right, to obtain coalbed gas reserves as a sharply right-skewed output. Regardless of the shapes of the inputs to a volumetric model—be they skewed right, skewed left, or symmetric—the output will still be skewed right, thus approximately log-normal. The central limit theorem guarantees this, because the log of a product (of distributions) is a sum of the logs (of distributions), which tends to be normal. Thus, the product, the log of which is normal, satisfies the definition of a log-normal distribution.
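That argument is easy to test numerically. Below, five triangular inputs mimic the shape mix of the Caldwell and Heather example (two sharply skewed left, one symmetric, two skewed right; the actual numbers are invented), and the skewness of their product is computed directly:

```python
import random
import statistics

def skewness(xs):
    """Sample skewness: positive means skewed right."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Five triangular inputs (lo, mode, hi): two sharply skewed left, one
# symmetric, two skewed right.  The numbers are invented for illustration.
inputs = [(1, 9, 10), (2, 7, 8), (4, 6, 8), (3, 4, 9), (2, 3, 7)]

random.seed(1)
n = 20_000
product = [1.0] * n
for lo, mode, hi in inputs:
    product = [p * random.triangular(lo, hi, mode) for p in product]

left_item = [random.triangular(1, 10, 9) for _ in range(n)]  # a left-skewed input
print(f"left-skewed input skew {skewness(left_item):.2f}")
print(f"product skew {skewness(product):.2f}  (positive = skewed right)")
```

Even with left-skewed inputs, the product comes out clearly right-skewed, as the central limit theorem argument predicts.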

Points Worth Pondering

Handling Correlation Among Inputs. In the discussion so far, the input parameters have been described, and handled, as if they were each independent of one another. In many geologic settings, however, these input parameters would have an interdependency. This can be incorporated in our models by using correlation between the appropriate parameters. Some of the correlations that we apply result from fundamental principles in petroleum engineering. One such correlation that should be included in many volumetric models is that in a clastic, water-wet rock, water saturation and porosity are negatively correlated. In the volumetric formula, that relationship leads to a positive correlation between hydrocarbon saturation and porosity. Other correlations may be necessary, depending on the geologic story that goes hand in hand with the resource and reserve estimates. Fig. 10.30 shows the typical impact of positive correlation in a volumetric product model—the resulting distribution has greater uncertainty (standard deviation, range) and a higher mean than the uncorrelated estimate.

Probability of Geologic Success. The hydrocarbon-in-place (resource) estimates become reserve estimates by multiplying by recovery factors. Until we model the probability of success, the implication is that we have 100% chance of encountering those resources. In reality, we must also incorporate the probability of geologic success in both our resource and reserve estimates. If the P (S) for the volumetric example is assigned as 20%, we would use a binomial distribution to represent that parameter, and the resulting reserve distribution will have a spike at zero for the failure case (with 80% probability). Fig. 10.31 illustrates the risked reserve estimate. (Note that the success case is as illustrated in Fig. 10.29.)
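Risking the volumetric estimate can be sketched as below, with a log-normal standing in for the Fig. 10.29 success-case distribution (the median of 50 MMbbl is an invented value):

```python
import math
import random
import statistics

p_success = 0.20      # probability of geologic success, P(S)
random.seed(1)
risked = []
for _ in range(50_000):
    if random.random() < p_success:               # geologic success (binomial)
        # Success-case reserves: log-normal with median 50 MMbbl, a stand-in
        # for the Fig. 10.29 distribution (invented parameters).
        risked.append(random.lognormvariate(math.log(50), 0.5))
    else:
        risked.append(0.0)                        # failure case: spike at zero

dry_fraction = sum(1 for r in risked if r == 0) / len(risked)
print(f"spike at zero: {dry_fraction:.0%}; "
      f"risked mean {statistics.mean(risked):.1f} MMbbl")
```

The risked mean is simply P(S) times the success-case mean, but the full distribution, with its 80% spike at zero, carries far more information for decision making.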

Layers and Multiple Prospects. Often a well or a prospect has multiple horizons, each with its chance of success and its volumetric estimate of reserves. A proper evaluation of these prospects acknowledges the multiplicity of possible outcomes, ranging from total failure to total success. If the layers are independent, it is simple to assign probabilities to these outcomes in the manner already discussed. Whether one seeks a simple mean value or a more sophisticated Monte Carlo simulation, the independence assumption gives a straightforward procedure for estimating aggregate reserves.

When the layers are dependent, however, the aggregation problem becomes subtler: the success or failure of one layer alters the chance of success of other layers, and the corresponding probabilities of the various combinations of successes are more difficult to calculate. Moreover, the rules of conditional probability, notably consequences of Bayes' Theorem, provide challenges to those who assign estimates for the revised values. Even in the case of two layers, some estimators incorrectly assign these values by failing to correctly quantify their interdependence. These issues have been addressed by Murtha, [19][20] Delfiner, [21] and Stabell, [22] who offer alternative procedures for handling dependence, be it between layers, reservoirs, or prospects.

Summary. The main factor in the volumetric equation, area or bulk volume or net rock volume, can be skewed left, symmetric, or skewed right. The other factors in a volumetric formula for hydrocarbons in place will tend to have symmetric distributions and can be modeled as normal random variables. Regardless of the shapes of these input distributions, the outputs of volumetric formulas, namely hydrocarbons in place and reserves, tend to be skewed right or approximately log-normal. Many of the natural correlations among the volumetric equation input parameters are positive correlations, leading to reserve distributions that have higher means and larger ranges (more accurately, larger standard deviations) than the uncorrelated estimates. The probability of geologic success can be modeled using a binomial variable. Finally, modeling layers or multiple prospects is accomplished by aggregating the individual layer or prospect estimates within a Monte Carlo simulation. In those cases where there is geologic dependence between the success of the layers (or prospects), that dependence can be modeled using correlation between the binomial variables representing P (S).

If interested in reserves and resources, one should consult the chapter on Estimation of Primary Reserves of Crude Oil, Natural Gas, and Condensate in the Reservoir Engineering and Petrophysics volume of this Handbook, which goes into more detail about types of reserves and the relationship between deterministic and probabilistic reserves.

Production Forecasts

A production engineer is responsible for generating the production forecast for a well, or for a field. Where does the engineer start? Darcy's law gives an estimate of the initial production. Drive mechanism, physical constraint, regulations, reserves, and well geometry influence how long, or if, the well or field will maintain a plateau production rate. Once production drops from the peak or plateau rate, the engineer needs an estimate of decline rate. One can quickly realize that with all these uncertainties, production forecasts are another candidate on which to use risk analysis techniques to help quantify the uncertainty. Ironically, even producing wells with historical data have uncertainty about their decline rates, because of errors or omissions in production monitoring and because of noise or erratic production profiles that allow for various interpretations.

Production Forecast, Single Well. Table 10.11 illustrates a simple spreadsheet model for a single-well production forecast. The model has one main assumption: that oil production follows exponential decline [q = qi e^(−at), so that q(n+1) = qn e^(−a)], where qi is the annual production of the first year and a is the annual percentage decline rate. While this model used exponential decline, similar models can be built for linear, harmonic, or hyperbolic declines.

Choice of Input Distributions. In this simple single-well production forecast, there are only two input distributions required—production start rate and decline rate. The production in time period one (year one) is estimated from Darcy's Law, a "product model" with factors such as permeability, net pay, 1/viscosity, and so on. Because this is a product, one would expect that the distribution is approximately log-normal. In fact, experience has shown that there is a great deal of uncertainty and variability in production start rates. Thus, not only initial production rate, but the production in each subsequent time period, is right-skewed.

Decline rate, on the other hand, does not typically have a wide variability in a given reservoir environment. If the production is to be maintained for 10 years, it will be impossible to have a very high decline rate. If the decline rate is too low, we will be simulating an unrealistic recovery of reserves over the forecast period. These constraints, whether practical or logical, lead us to conclude that decline rate is best suited to be represented with a normal distribution.
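These two choices—a right-skewed (log-normal) start rate and a narrow normal decline rate—are all the model needs. A sketch with invented parameters:

```python
import math
import random
import statistics

years = 10
random.seed(1)
profiles = []
for _ in range(10_000):
    qi = random.lognormvariate(math.log(300), 0.3)   # year-1 rate, right-skewed (invented)
    a = random.gauss(0.15, 0.03)                     # annual decline rate, narrow (invented)
    profiles.append([qi * math.exp(-a * n) for n in range(years)])

# Report the year-by-year summary statistics, as in a forecast summary graph.
for year in (0, 4, 9):
    vals = sorted(p[year] for p in profiles)
    print(f"year {year + 1}: P5 {vals[500]:.0f}  "
          f"mean {statistics.mean(vals):.0f}  P95 {vals[9500]:.0f}")
```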

Simulated Production Forecast. Fig. 10.32 shows the production forecast for the well for the first 10 years. The summary graph shows the mean of the profile, as well as the interquartile range (P25 and P75) and the 90% confidence interval (P5 and P95). Beware: the figure represents the production for individual years, and connecting the yearly P5 points into an envelope does not produce a curve that can properly be called the P5 production forecast, because no single realization is likely to fall at the P5 level in every year.

Production Forecast, Multiple Wells. Many times the requirement for a production forecast is not for a single well but for some group of wells. What if our forecast were for a field coming online with 10 wells drilled and ready to produce in Year 1? The model becomes something like that shown in Table 10.12, with further questioning needed for correct modeling. Will the wells be independent of one another, or will the production rates and decline rates be correlated from well to well? The resulting production forecast from the uncorrelated wells case is illustrated in Fig. 10.33. Notice that this is an aggregation model: the year-to-year production distributions are nearly normal, and the ranges are narrower than we might have intuited. Now look at Fig. 10.34, the resulting production forecast for the 10 wells, but now with moderate correlation between both the initial production rates and the decline rates among the wells. The effect of this positive correlation is to increase the standard deviation (but not the means) of the forecast year to year.

Finally, consider the opportunity to participate in a sequence of wells similar to the previous example but where we will have one well come online per year. What will our production forecast look like then? It becomes a sequencing-and-aggregation problem, and one can imagine the spreadsheet shown in Table 10.12 altered so that Well 2 begins in Year 2, Well 3 in Year 3, and so on. Our production forecast is shown in Fig. 10.35 and looks significantly different from the previous examples. Production increases as each new well is brought on in years 1 through 10, although the earlier wells are each individually declining (as in Fig. 10.33). Peak production is achieved in Year 10, and the field is on constant decline thereafter. Next is a list of ways we may use these forecasts:

  • To help with well timing requirements and facility design.
  • To schedule workover frequencies.
  • To suggest to the drilling engineers the completion geometry that will optimize production (in concert, of course, with the reserve estimates and spatial considerations of the productive intervals).
  • As input to the economics model(s).
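The sequencing-and-aggregation variant (one well coming online per year) can be sketched as follows, using invented single-well parameters (log-normal start rate, normal decline):

```python
import math
import random
import statistics

wells, years = 10, 15
random.seed(1)
field = []
for _ in range(5_000):
    profile = [0.0] * years
    for w in range(wells):          # well w+1 comes online in year w+1
        qi = random.lognormvariate(math.log(300), 0.3)  # start rate (invented)
        a = random.gauss(0.15, 0.03)                    # decline rate (invented)
        for n in range(w, years):
            profile[n] += qi * math.exp(-a * (n - w))
    field.append(profile)

means = [statistics.mean(p[n] for p in field) for n in range(years)]
peak_year = means.index(max(means)) + 1
print(f"mean field production peaks in year {peak_year}, then declines")
```

As in Fig. 10.35, production builds while new wells are added, peaks in the year the last well comes on, and declines thereafter.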


Production Forecast, Workovers or Disruptions. There are many refinements we can make to the model; one shortcoming that might come quickly to mind is that the model assumes there are no interruptions or disruptions to the production. We can implement sporadic or random interruptions by using a binomial variable, where in each time step there is some probability that the production will be stopped or curtailed.

Cash-Flow Calculations. The cash-flow calculation is the one upon which most decisions are based. It is the culmination of the engineering effort. There are three ways that cash flows are currently being calculated under the guise of producing stochastic cash flows. In the first method, deterministic estimates (usually either most likely or average values) are collected from all the engineers (i.e., a single capital expenditure estimate, a single reserve estimate, a production profile, etc.), and then the financial modeler applies some ranges to these estimates (sometimes with input from the engineers) and produces a probabilistic cash flow. This method is clearly inadequate because the probabilistic components must be built from the ground up, not tacked on as an afterthought.

In the second method, P10, P50, and P90 scenarios for each of the inputs to the cash flow model are requested, and then the financial modeler uses a hybrid-scenario approach; all the P10 estimates are combined to get P10 economics, all the P50 estimates are combined to get P50 economics, and all the P90 estimates are combined to get P90 economics. Even if the percentiles are correct for the cash-flow inputs, why would those percentiles carry through to the same percentiles for NPV or internal rate of return (IRR)? In fact, they do not.

In the third method, the correct one, capital expenditures, reserves, and production profiles are retained as probabilistic estimates. The economic model is run as a Monte Carlo simulation, and full probabilistic cash flow (NPV, IRR) estimates result. That is, we build a cash-flow model containing the reserves component as well as appropriate development plans. On each iteration, the field size and perhaps the sampled area might determine a suitable development plan, which would generate capital (facilities and drilling schedule), operating expense, and production schedule—the ingredients, along with prices, for cash flow. Full-scale probabilistic economics requires that the various components of the model be connected properly (and correlated) to avoid creating inappropriate realizations. The outputs include distributions for NPV and IRR.

We extend our example for the single well production profile to predict cash flow, NPV, and IRR for the investment. Table 10.13 shows the new spreadsheet model, which now has columns for price, capital expenditures, operating expenses, and revenue.

  • Capital expenditure is a single up-front investment that occurs once in Year 0 and is a distribution, such as one that would have been obtained from a probabilistic AFE model as discussed earlier in this section.
  • Production decline is as described in the production forecast section.
  • Price escalates at a fixed annual percentage [p(n+1) = p(n) × (1 + s)], where s could be 5%, for example.
  • Operating expense has a fixed component and a variable component.
  • Year-end discounting (Excel standard).
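The bulleted model above can be sketched as a vectorized Monte Carlo simulation. All distributions, rates, and prices below are illustrative assumptions, not the values behind Table 10.13, and IRR is omitted to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, years, disc, esc = 10_000, 10, 0.10, 0.05

# Hypothetical input distributions (stand-ins for the chapter's model).
capex = rng.triangular(8e6, 10e6, 14e6, n_trials)   # Year-0 investment, $
qi    = rng.lognormal(np.log(500), 0.3, n_trials)   # initial rate, B/D
a     = rng.uniform(0.15, 0.35, n_trials)           # nominal decline, 1/yr

t = np.arange(1, years + 1)                         # Years 1..10
q = qi[:, None] * np.exp(-a[:, None] * t)           # exponential decline, B/D
price = 60.0 * (1 + esc) ** (t - 1)                 # fixed-% escalation, $/bbl
volume = q * 365                                    # annual volume, bbl
opex = 300_000 + 2.0 * volume                       # fixed + variable, $
cash = volume * price - opex                        # net cash flow each year
npv = (cash / (1 + disc) ** t).sum(axis=1) - capex  # year-end discounting

print(f"P(NPV > 0) = {(npv > 0).mean():.2f}")
print(f"mean NPV   = ${npv.mean()/1e6:.1f} million")
```

Each row of `q` is one iteration's plausible realization, so the resulting `npv` array is a full probability distribution rather than a single number.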


The output from this model now gives us not only the probabilistic production profile but also probabilistic estimates of NPV and IRR, as illustrated in Figs. 10.36 and 10.37, respectively. What are the drivers in this model for NPV? Fig. 10.38 shows the sensitivity graph, in which production dominates the other variables.

Using the third (correct) method, we can answer questions like:

  • What is the chance of making money?
  • What is the probability of NPV > 0?
  • What is the chance of exceeding our hurdle rate for IRR?


These questions are equally applicable whether the economic model is for evaluating a workover or stimulation treatment, a single infill well, an exploration program, or development of a prospect. For prospect or development planning and ranking, the answers to these questions, together with the comparison of the reserves distributions, give us much more information for decision making or ranking the prospects. Moreover, the process indicates the drivers of NPV and of reserves, leading to questions of how best to manage the risks.

No one will argue that it is simple to build probabilistic cash-flow models correctly. The benefits of probabilistic cash-flow models, however, are significant, allowing us to make informed decisions about the likelihood of attaining specific goals and finally giving us the means to do portfolio optimization.

Engineering and Geoscientific Issues-Avoiding Pitfalls of Deterministic Models


People who espouse risk-analysis methods are sometimes challenged by skeptics to justify the effort required to implement these methods. We must answer the question: "Why bother with risk analysis?" Put another way, "What's wrong with deterministic methods?" A short answer appears in Murtha: [23]

"We do risk analysis because there is uncertainty in our estimates of capital, reserves, and such economic yardsticks as NPV. Quantifying that uncertainty with ranges of possible values and associated probabilities (i.e., with probability distributions) helps everyone understand the risks involved. There is always an underlying model, such as a volumetric reserves estimate, a production forecast, a cost estimate, or a production-sharing economics analysis. As we investigate the model parameters and assign probability distributions and correlations, we are forced to examine the logic of the model.

The language of risk analysis is precise; it aids communication, reveals assumptions, and reduces mushy phrases and buzz words. This language requires study and most engineers have little exposure to probability and statistics in undergraduate programs."

Beyond that, we indicate some shortcomings of deterministic methods.

Aggregating Base Cases-Adding Modes and Medians (Reserves, Cost, Time)

Deterministic reserves estimates are often described in terms of low-, medium-, and high-side possibilities. Some people think in terms of extremes: worst and best cases together with some base cases (mean, mode, or median). Others report P10, P50, and P90 values. Sometimes, these cases are linked to such categories as proved, proved plus probable, and proved plus probable plus possible. While there is nothing wrong with any of these notions, the logic of obtaining the cases is often flawed. Again, from Murtha: [23]

"Total capital cost is often estimated by adding the base costs for the various line items. A simple exercise shows how far off the total cost can be. Take ten identical triangular distributions, each having 100, 200, and 350 for low, most-likely (mode), and high values, respectively. While the mode of each is 200, the mean is 216.7. Summing these ten triangles gives, as usual, a new distribution that is approximately normal—this one with a mean of 2,167 and a standard deviation of approximately 165. The original mode, 200, is approximately P40. The sum of the modes is approximately P15, far from what might be expected as a 'representative value' for the distribution. In a 2,000-trial simulation, the P1 and P99 values are about 1,790 and 2,550.

If the distributions represented 10 line-item cost estimates, in other words, while there would be a 60% chance of exceeding the mode for any single estimate, there is an 85% chance—about 6 times out of 7—of exceeding the sum of the modes. If we added 100 items instead of just 10, the chance of exceeding the sum of modes is more than 99%. We must be careful how we use most-likely (modes) estimates for costs and reserves. Of course, if there is significant positive correlation among the items, the aggregate distribution will be more dispersed and the above effect less pronounced."
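Murtha's exercise is easy to reproduce. The sketch below simulates the ten identical triangular line items and recovers the quoted statistics:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Ten identical triangular line items: low 100, mode 200, high 350.
items = rng.triangular(100, 200, 350, size=(n, 10))
total = items.sum(axis=1)

print(f"mean of one item : {items.mean():.1f}")   # close to 216.7, not 200
print(f"mean of total    : {total.mean():.0f}")   # close to 2,167
print(f"std of total     : {total.std():.0f}")    # roughly 160 to 165
print(f"P(total > sum of modes) : {(total > 2000).mean():.2f}")  # about 0.85
```

The sum of the ten modes, 2,000, sits well down in the left tail of the total, which is exactly the quoted 85% chance of being exceeded.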

Multiplying Base Cases or P10s (Factors to Yield Reserves or Resources)

When volumetric products are used to obtain reserves estimates, there is a temptation to build the low-side reserves estimate by blithely taking the product of low estimates for the various factors. This is a dangerous business at best. The product of P10 estimates for area, pay, and recovery factor, for example, is approximately P1. For special cases (all distributions log-normal, no correlations), one can find an exact answer, but if you use a different distribution type, include any correlations between inputs (larger area tends to be associated with thicker pay), or change the number of factors (breaking out recovery factor into porosity, saturation, formation volume factor, and efficiency), there are no simple rules of thumb to predict just how extreme the product of P10 values is.

Less obvious is the fact that neither the P50 values nor the modes of the inputs yield, respectively, the P50 or the mode of the output, except in very special cases. The mean values of the inputs will yield the mean of the product distribution, but only if there is no correlation among the inputs. In other words, even the "base-case" reserves estimate generally should not be obtained from a product of base-case inputs.
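The percentile collapse can be checked directly. The sketch below assumes three uncorrelated log-normal volumetric factors, the special case for which an exact answer exists:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Three uncorrelated log-normal volumetric factors (area, pay, recovery).
factors = [rng.lognormal(0.0, 0.5, n) for _ in range(3)]
product = factors[0] * factors[1] * factors[2]

low = np.prod([np.percentile(f, 10) for f in factors])  # product of the P10s
pct = (product < low).mean() * 100
print(f"product of the P10s sits near P{pct:.1f} of the product distribution")
```

With correlations, more factors, or other distribution types, the product of P10s drifts even further into the tail, which is why no simple rule of thumb applies in general.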

Including Pilot Projects With P(S) > 0.5

Imagine this. In preparation for a court resolution of ownership, an operating company wishes to estimate the value of its extensive holdings in a major oil field. Complicating factors include several possible programs, some involving new technology. Five of the programs require successful completion of a pilot project. Based on laboratory analysis and somewhat similar development procedures in less harsh environments, the pilots all have high estimates of success, ranging from 65 to 80%. In the deterministic version of the global model, for each pilot they ignore the existence of the others, assume each pilot is successful (because each is greater than 50%), and automatically include the corresponding program. From a probabilistic standpoint, however, the chance of all pilots being successful is quite small (roughly 0.7^5 ≈ 0.17, or about 5 to 1 against). The actual Monte Carlo model is so inconsistent with the deterministic model that the first-pass results show the deterministic estimate (or better) to have only about a 5% chance of happening. Note that in the Monte Carlo simulation the more realistic scenario is used—whereby, on each iteration, the pilot either succeeds and the follow-up program is included or fails, and no contribution is included for the follow-up.

A more correct deterministic method would include P(S) × (value pilot + value of corresponding program), but even this would not shed any light on the range of possibilities. In short, it is difficult to properly account for stages of development when each stage has uncertain levels of success.
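The deterministic shortcut (include every pilot because each P(S) > 0.5) can be contrasted with a simulation in which each pilot independently succeeds or fails on every iteration. The probabilities below are hypothetical values within the quoted 65 to 80% range:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
p_success = np.array([0.65, 0.70, 0.70, 0.75, 0.80])  # hypothetical pilots

# On each iteration, each pilot independently succeeds or fails.
outcomes = rng.random((n, 5)) < p_success
all_succeed = outcomes.all(axis=1).mean()

print(f"P(all five pilots succeed) = {all_succeed:.2f}")
print(f"exact product              = {p_success.prod():.2f}")
```

Even though every individual pilot is more likely than not to succeed, the scenario in which all five succeed, which is the only scenario the deterministic model considers, occurs on only about one iteration in five.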

Multiple Definitions of Contingency

Cost engineers add contingency to line items or to the total base estimate to account for some uncertainty. Within a company, the rules and guidelines are generally well known and consistently applied. Nonetheless, there are different interpretations among companies. One of the standard definitions says: "Cost contingency is the amount of additional money, above and beyond the base cost, that is required to ensure the project's success. This money is to be used only for omissions and the unexpected difficulties that may arise. ...Contingency costs are explicitly part of the total cost estimate."

By adding contingency to each line item, the total cost estimate contains the sum of these quantities. However, we have seen above the danger of summing deterministic variables. In effect, setting aside some additional funds for each line item tends to generate a larger aggregate contingency than is necessary because it emphasizes the unlikely prospect that all line items will simultaneously exceed their estimates. An alternative use of contingency is to apply a percentage to the total cost. This, at least, recognizes the sum of line items as its own distribution.
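Reusing ten identical triangular line items like those in the aggregation example above, a quick simulation shows how much larger the sum of line-item contingencies is than a contingency set on the total (P90 is used here, arbitrarily, as the contingency level):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Ten identical triangular line items, as in the aggregation example.
items = rng.triangular(100, 200, 350, size=(n, 10))
total = items.sum(axis=1)

# Contingency set line item by line item (at P90), then summed:
sum_of_p90s = np.percentile(items, 90, axis=0).sum()
# Contingency set once, on the distribution of the total:
p90_of_total = np.percentile(total, 90)

print(f"sum of line-item P90s : {sum_of_p90s:.0f}")
print(f"P90 of the total      : {p90_of_total:.0f}")  # noticeably smaller
```

Funding every line item to its P90 reserves far more money than is needed to cover the total at P90, because the items will not all run over simultaneously.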

Not Knowing How Likely Is the Most Likely Case

Even if a deterministic method could generate a most-likely case (by some method other than simply applying the model to the most likely inputs), we would not know how likely it would be to achieve that case (or better). Monte Carlo outputs allow us to estimate the likelihood of achieving any given outcome, so we avoid the surprise of discovering, for example, that the odds against bettering our most-likely case are 7 to 2.

Not Identifying Driving Variables

Deterministic models do not tell us which of the inputs are important. Sensitivity analysis is the general term for finding out how the output(s) of a model varies with changes in the inputs. Attempting to answer "what if" questions (What if the oil price goes to U.S. $80? What if the project is delayed by a year? What if the rig rate is twice as much as we budgeted for?) gave rise to two forms of sensitivity analysis called tornado charts and spider diagrams, which have been discussed (and their shortcomings mentioned) earlier in this chapter.

Pitfalls of Probabilistic Models

Just as there are shortcomings of deterministic models that can be avoided with probabilistic models, the latter have their associated pitfalls as well. Adding uncertainty, by replacing single estimate inputs with probability distributions, requires the user to exercise caution on several fronts. Without going into exhaustive detail (more will be said on the topic as we present several examples of both Monte Carlo and decision trees), we offer a couple of illustrations.

First, the probabilistic model is more complicated. It demands more documentation and more attention to logical structure. In particular, each iteration of a Monte Carlo model should be a plausible realization. The purpose of using a range of values for each input is to acknowledge the realm of possibilities. Thus, once each of the input distributions is sampled, the resulting case should be sensible, something an expert would agree is possible.

Second, our criticism of the classical sensitivity analysis procedures (tornado charts and spider diagrams) included the notion that some of the inputs would not be independent. Thus, our probabilistic model should address any relationships between variables, which typically are handled by imposing correlation between pairs of input distributions. Each of these coefficients requires a value between –1 and +1; it is the model builder's responsibility to assign and justify these values, which may be based on historical data or experience.

Data Availability and Usefulness

Probabilistic models rely on sensible choices of input distributions. "Garbage in/garbage out" is an often-heard complaint of skeptics and bears acknowledging. While it is true of any model that the results are only as good as the inputs, Monte Carlo models seem to draw more criticism about this aspect. Harbaugh et al.[24] take an extreme position, arguing that one cannot do uncertainty analysis without adequate analogous data. One consultant specializing in Monte Carlo simulation takes another view when he tells his clients, "I don't want to see any data. Instead, I want to build the model first, then do sensitivity analysis and find out what kind of data we really need to start collecting." Somewhere between these extremes lies a sensible position of relying on (1) experience ("I have had the opportunity to study data for this parameter in the past, and while I have no legitimate offset data, I know that under these circumstances the average net pay is slightly skewed right and has a coefficient of variation of about 15%."), (2) fundamental principles ("This input can be viewed as an aggregation, so its distribution must be approximately normal."), or (3) appropriate data, to form estimates of the inputs to a model.

A related problem arises when the available data are simply not appropriate. It is common to collect data from different populations (lumping porosities from different facies, drilling penetration rates at different depths, or prices from different seasons). Sometimes, simply plotting a histogram of the empirical data reveals bimodal behavior, almost always a sign of mixing samples from different populations. Naturally, data used as a basis for building a distribution should be vetted for measurement and clerical errors. However, one should be wary of tossing out extreme values to make the data look more like a familiar distribution. Rather, one should try to determine how the extreme values came about. They may be your best samples.

Novices always want to know how many data points are necessary before one can reliably build a distribution from them. This is not a simple matter. You may find a quick answer in a statistics text about significance, but in our world, we often do not have an adequate number of samples for statistical significance, and yet we must work the problem. The question comes down to how many points you need to build a "sensible" histogram. Curve-fitting software does not work very well with fewer than 15 points. Rather than relying on some automatic process, one should use common sense and experience. Among other things, one can often guess the distribution type (at least whether it is symmetric, or the direction of skewness) and then treat estimates of the minimum and maximum values as P10 and P90, or P5 and P95, as a starting point.

Level of Detail

Often, a problem can be analyzed at various levels of detail. Cost models are a good case in point. In one large billion-dollar Gulf of Mexico deepwater development, the Monte Carlo model had 1,300 line items. Another client built a high-level cost estimate for construction of a floating production, storage, and offloading (FPSO) vessel with only 12 items. Production forecasts for fields can be done at a single-well level and then aggregated, or simply done as a single forecast with a pattern of ramp-up, plateau, and decline. Cash-flow models tend to be large when they have small time steps of weeks or months, as opposed to years.

In every case, the model builder must choose a sensible level of detail, much like a person doing numerical reservoir simulation must decide how many gridblocks to include. Among the guidelines are these:

  • Consider building two or more models—one more coarse than the other(s).
  • Consider doing some modeling in stages, using the outputs of some components as inputs to the next stage. This process can lead to problems when there are significant correlations involved.
  • Work at a level of detail where the experts really understand the input variables and where data may be readily accessible.


In the end, common sense and the 80/20 rule apply. You generally do not have the luxury of making a career out of building one model; you must move on to other jobs and get the results to the decision makers in a timely fashion.

Handling Rare Events

Rare events generally can be modeled with a combination of a discrete variable (Does this event occur or not?) and a continuous variable (When the event occurs, what is the range of possible implications?). Thus, "stuck pipe while drilling" (discussed in detail elsewhere in this chapter) can be described with a binomial variable with n = 1 and p = P(stuck) and "Stuck Time" and perhaps "Stuck Cost" as continuous variables. This method applies as well to downtime, delays, and inefficiencies.
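A minimal sketch of this discrete-plus-continuous construction follows, with assumed values for P(stuck) and for the stuck-cost distribution:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
p_stuck = 0.15  # assumed probability of stuck pipe on a given well

occurs = rng.random(n) < p_stuck                    # binomial with n = 1
severity = rng.lognormal(np.log(200_000), 0.6, n)   # cost when it happens
cost = np.where(occurs, severity, 0.0)              # zero when it does not

print(f"P(stuck pipe)            = {occurs.mean():.2f}")
print(f"mean cost over all wells = ${cost.mean():,.0f}")
print(f"mean cost given stuck    = ${cost[occurs].mean():,.0f}")
```

The resulting cost distribution has a spike at zero and a continuous right tail, which is the characteristic shape of a rare-event variable; downtime, delays, and inefficiencies can be modeled the same way.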

Software Choices

For several years, there have been numerous decision-tree software applications, some dating back to the early 1980s and costing a few hundred to a few thousand dollars per license. Monte Carlo add-ins to Excel have been available since the mid- to late 1980s, with list prices under U.S. $1,000. Full cycle cash-flow models tend to cost tens of thousands of dollars.

Correlation can make a difference in Monte Carlo models. As discussed in Murtha[14]:

"What does correlation do to the bottom line? Does it alter the distribution of reserves or cost or NPV, which is, after all, the objective of the model? If so, how? We can make some generalizations, but remember Oliver Wendell Holmes's admonition, 'No generalization is worth a damn...including this one.' First, a positive correlation between two inputs results in more pairs of two large values and more pairs of two small values. If those variables are multiplied together in the model (e.g., a reserves model), it results in more extreme values of the output. Even in a summation or aggregation model (aggregating production from different wells or fields, aggregating reserves, estimating total cost by summing line items, estimating total time), positive correlation between two summands causes the output to be more dispersed. In short, in either a product model or an aggregation model, a positive correlation between a pair of variables increases the standard deviation of the output. The surprising thing is what happens to the mean value of the output when correlation is included in the model. For product models, positive correlation between factors increases the mean value of the output. For aggregation models, the mean value of the output is not affected by correlation among the summands. Let us hasten to add that many models are neither pure products nor pure sums, but rather complex algebraic combinations of the various inputs."
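These generalizations can be verified with a pair of correlated log-normal inputs. In the sketch below, the correlation is imposed by sampling correlated standard normals and exponentiating; rho = 0.8 is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def sample_pair(rho):
    # Correlated standard normals, exponentiated to give log-normals.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n)
    return np.exp(z[:, 0]), np.exp(z[:, 1])

stats = {}
for rho in (0.0, 0.8):
    x, y = sample_pair(rho)
    stats[rho] = (np.mean(x + y), np.std(x + y), np.mean(x * y))
    print(f"rho={rho}: mean(x+y)={stats[rho][0]:.2f}  "
          f"std(x+y)={stats[rho][1]:.2f}  mean(x*y)={stats[rho][2]:.2f}")
```

As predicted, the mean of the sum is unchanged by correlation, the standard deviation of the sum grows, and the mean of the product increases noticeably.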

Impact of Distribution Type

A standard exercise in Monte Carlo classes is to replace one distribution type with another for several inputs to a model and compare the results. Often, the students are surprised to find that the difference can be negligible. Rather than generalizing, however, it is a good idea to perform this exercise when building a model, prior to the presentation. That is, when there are competing distributions for an input parameter, one should test the effect on the bottom line of running the model with each type of distribution. Simple comparisons of the means and standard deviations of key outputs would suffice, but a convincing argument can be generated by overlaying the two cumulative curves of a key output obtained from the alternative distributions.
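A sketch of the exercise: the same volumetric model run twice, swapping a triangular net-pay distribution for a log-normal one with the same mean. All parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Volumetric model: reserves proxy = area * pay * recovery.
area = rng.lognormal(np.log(1000), 0.3, n)
recovery = 0.5
pay_candidates = {
    "triangular": rng.triangular(10, 20, 36, n),                   # mean 22
    "log-normal": rng.lognormal(np.log(22) - 0.3**2 / 2, 0.3, n),  # mean 22
}

summary = {}
for name, pay in pay_candidates.items():
    res = area * pay * recovery
    summary[name] = (res.mean(), res.std(), np.percentile(res, 50))
    print(f"{name:10s}: mean={summary[name][0]:.0f}  "
          f"std={summary[name][1]:.0f}  P50={summary[name][2]:.0f}")
```

When the two candidate distributions share a mean and a similar spread, the key output statistics typically land close together, which is exactly the comparison worth documenting before the presentation.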

Corporate Policies

Unlike many other technical advances, uncertainty analysis seems to have met with considerable opposition. It is common in companies to have isolated pockets of expertise in Monte Carlo simulation in which the analysis results have to be reduced to single-value estimates. That is, rather than presenting a distribution of reserves, NPV, or drilling cost, only a mean value or a P50 from the respective distribution is reported. It is rare for an entire company to agree to do all their business using the language of statistics and probability.

Reserves Definitions

Because oil/gas corporations report and rely upon booked reserves, the application of probabilistic and statistical language and concepts to reserves has become a topic of considerable discussion and some controversy. Among the applications in this section are examples dealing with reserves. However, for a more complete discussion of the definitions of reserves and the alignment between deterministic and probabilistic terminology, the reader should consult the chapter on Estimation of Primary Reserves of Crude Oil, Natural Gas, and Condensate by Cronquist and Harrell in the Reservoir Engineering and Petrophysics volume of this Handbook.

Design of Uncertainty Models


Probabilistic models—like any models—benefit from good design. A Monte Carlo model is, in principle, just a worksheet in which some cells contain probability distributions rather than values. Thus, one can build a Monte Carlo model by converting a deterministic worksheet with the help of commercial add-in software. Practitioners, however, soon find that some of their deterministic models were constructed in a way that makes this transition difficult. Redundancy, hidden formulas, and contorted logic are common features of deterministic models that encumber the resulting Monte Carlo model.

Likewise, presentation of results from probabilistic analysis might seem no different from any other engineering presentation (problem statement, summary and conclusions, key results, method, and details). Monte Carlo and decision-tree models, however, demand special considerations during a presentation.

This section describes the features of probabilistic models, outlines elements of a good design, and suggests how to ensure that presentations are effective. For the most part, these comments pertain to Monte Carlo models.

Model: Equations + Assumptions

For our purposes, a model is one or more equations together with assumptions about the way the variables (inputs or outputs) may be linked or restricted. Next, we give some guidelines for model builders.

Specify All Key Equations. For example, N = AhR for volumetric reserves or q = qi × e–at for an exponential decline production forecast. Some models have very simple equations, such as cost estimates where total cost is just an aggregation of line items. Other models have complex structure, such as cash-flow models with multiple production streams, alternative development plans, or intricate timing issues. While some aspects are routine (e.g., revenue = price × volume, cash = revenue – costs), features unique to the problem at hand should be stressed.

Are There Alternative Models? Sometimes there are two or more models that achieve much the same objective. Comparing the model at hand with others familiar to the audience can be useful.

Other Projects That Use This Model. Knowing that other projects have used a model adds credibility and opens the opportunity to learn from their experience.

List All Assumptions. For example:

  • Two successful wells are necessary before the field is proved.
  • If field size exceeds 100 Bcf, then a second platform is needed.
  • Gas price is locked according to contract.
  • Success rate on second well increases if first well is commercial.
  • Pipeline has maximum capacity of 50,000 B/D.
  • All reserves must be produced within 15 years.


Input Distributions, Types and Dependency

List All Deterministic Inputs. Although probability distributions occupy center stage in a Monte Carlo model, key deterministic values should be highlighted (e.g., interest rate = 10.5%, start time = 1 January 1996, duration = 10 years).

List All Input Distributions: Type, Defining Parameters, Basis for Choosing This Distribution. Even in large models with hundreds of input distributions, it is essential to identify them all. Large models tend to have multiple parameters of the same kind, which can be typified by one particular variable. For instance, cash-flow models often have a new distribution each time period for prices and expenses. While there may be dozens or even hundreds of time steps, the prototype need be mentioned only once.

Of all the features of the model, the reason for selecting one distribution over another is often a point of discussion that will be raised in a presentation. Each distribution should be identified by type (e.g., normal, log-normal, beta) and by defining parameters (mean and standard deviation, or minimum, mode, maximum). Moreover, the user should explain why the particular distribution was chosen (empirical data fit by software, experience, or fundamental principle). The justifications should usually be brief, especially when the user/presenter can state that the particular choice of distribution is not critical to the results. If other distributions were tested, a comparison of the results should be available if needed.

Selection of Outputs

In most models, everyone is aware of the natural output(s). In a cost model, we are interested in total cost, but we may also want to know certain subtotals. In reserves models, we want to know the distribution of reserves, but we may want to see the hydrocarbons in place or the breakdown into oil, gas, and liquids. In cash-flow models, we want NPV and perhaps IRR, but we might also want to see production forecasts or cash-flow forecasts, as well as some derived quantities such as cost per barrel, profit-to-investment ratios, and so on. Bear in mind that an effective presentation focuses on key elements of the results. Too much detail obscures the bottom line and risks losing the audience's attention. The model designer must choose a suitable level of detail.

Sampling Process

Monte Carlo models give the user the option of two types of sampling: one is Monte Carlo and the other is stratified, also called Latin Hypercube sampling. The vast majority of users prefer stratified sampling because the model converges to the desired level in far fewer iterations and, thus, runs faster, allowing the user to do more testing. An example of stratified sampling is to request 100 samples but insist that there is one representative of each percentile. That is, there would be one value between P0 and P1, another between P1 and P2, and so on.
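The stratification described above can be sketched in a few lines: generate one uniform draw per percentile band, shuffle to destroy the ordering, then map through the inverse CDF of the target distribution (an exponential with mean 1 is used here purely as an example):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100  # one stratum per percentile

# Stratified (Latin hypercube) uniforms: one draw in each band [k/n, (k+1)/n).
u = (np.arange(n) + rng.random(n)) / n
rng.shuffle(u)  # destroy the ordering before pairing with other inputs

# Map through an inverse CDF; exponential with mean 1 as an example target.
x = -np.log(1.0 - u)

counts, _ = np.histogram(u, bins=np.linspace(0.0, 1.0, n + 1))
print(f"every percentile band sampled exactly once: {(counts == 1).all()}")
print(f"sample mean of the exponential draws: {x.mean():.2f}")
```

Because every percentile band contributes exactly one sample, the sample statistics stabilize in far fewer iterations than with unstratified Monte Carlo draws.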

Storage of Iterations

Monte Carlo software gives the user a choice of how much of the input/output data to store and make accessible after the simulation. At one extreme, one can save only the designated outputs (the reserves, NPV, and total cost, for example). At the other extreme, one can store all sampled values from the input distributions. Having the inputs available at the end of a run is necessary for sensitivity analysis, which calculates the rank correlation coefficient between each output array and each input array, as well as stepwise linear regression coefficients (discussed later). Experienced modelers sometimes identify intermediate calculations and designate them as outputs just to make their values available for post-simulation analysis. Murtha[25] discusses "pseudocases," which are constructed from these auxiliary variables. For small models, one can be generous in storing data. As models grow, some discretion may be necessary to avoid long execution times or unwieldy data files.

Sensitivity Analysis

Sensitivity analysis, in essence, is "what if" analysis. As mentioned in Sec. 10.6, classical sensitivity tools are tornado charts and spider diagrams obtained by holding fixed all but one variable and measuring the change in a key output when the remaining input is varied by some specified amount.

Monte Carlo Sensitivity. Monte Carlo models offer a robust form of sensitivity analysis, which usually comes with two choices of metrics: rank correlation and regression. In each case, the objective is to rank the various inputs according to their impact on a specified (target) output.

Rank Correlation Sensitivity Analysis. Let Y be an output and X an input for the model. The rank correlation coefficient, rr, between Y and X is a number between –1 and +1. (See the definition and discussion in Sec. 10.4.) The closer rr is to +1 or –1, the more influence X has on Y. Positive correlation indicates that as X increases, Y tends to increase. When rr is negative, Y tends to decrease as X increases. A sample of values appears in Fig. 10.18.

Regression Sensitivity Analysis. Let Y be an output and X1, ..., Xn be inputs. At the end of the simulation, a stepwise linear regression is done with Y as the dependent variable, generating a set of normalized regression coefficients for the Xs. These coefficients fall between –1 and +1, where a –0.4 for Xi would indicate that Y would decrease by 0.4 standard deviations if Xi increased by one standard deviation. Generally speaking, the two methods (correlation and regression) give the same ranking of the inputs.
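Both metrics are straightforward to compute once the sampled inputs have been stored. The sketch below computes rank correlations by hand for a toy volumetric model; the distribution choices are illustrative, with recovery deliberately given the narrowest range:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 10_000

def ranks(v):
    # Rank-transform a sample (1 = smallest value).
    out = np.empty(len(v))
    out[np.argsort(v)] = np.arange(1, len(v) + 1)
    return out

# Toy volumetric model; recovery is deliberately the narrowest input.
area = rng.lognormal(0.0, 0.5, n)
pay = rng.lognormal(0.0, 0.4, n)
recovery = rng.uniform(0.25, 0.35, n)
reserves = area * pay * recovery

sens = {}
for name, x in (("area", area), ("pay", pay), ("recovery", recovery)):
    sens[name] = np.corrcoef(ranks(x), ranks(reserves))[0, 1]
    print(f"{name:9s} rank correlation with reserves: {sens[name]:+.2f}")
```

The widest input (area) earns the largest rank correlation and the narrowest (recovery) the smallest, which is the ranking a tornado-style sensitivity chart would display.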

Decision-Tree Sensitivity. Decision-tree sensitivity analysis relies on the classical sensitivity methods. We select one or two decision-tree inputs, namely probabilities or values, and let them vary over a prescribed range (containing the base value), solving the decision tree for each value. When one value is varied at a time, the resulting data can be displayed graphically as a plot of decision-tree value on the vertical axis and input value on the horizontal axis, with one segmented linear graph for each branch of the root decision node. See Sec. 10.6 for more details and associated figures. When two values are varied simultaneously, the analogous graph requires three dimensions and has the form of a segmented planar surface, which is often hard to display and explain. Alternatively, one can display the two-dimensional grid of pairs of values for the two inputs being varied, coloring them according to which decision branch is optimal.

One can do multiple one-way analyses and show a tornado or spider chart. Still, decision trees have limits to sensitivity analysis. Even more important, some decision trees have probabilities or values on different branches that are not independent. Consequently, users must be cautious when varying any values in the decision tree, ensuring that related values are also varied appropriately. For example, imagine a decision tree with two branches that estimates the cost of handling a kick under different conditions, say whether or not protective pipe has been set. When the kick cost is changed for the case without the protective casing, it may be, in part, because rig rates are higher than average, which would also make the costs on the other branch greater. Again, Sec. 10.6 provides more detail on decision-tree sensitivity analysis.
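A one-way decision-tree sensitivity can be sketched for a deliberately simplified two-branch version of the casing decision: vary P(kick) over a range and record which branch is optimal at each value. All costs are hypothetical, in $1,000s, and the real tree would allow a kick on either branch:

```python
# One-way sensitivity on a two-branch decision: set protective casing or not.
# All costs are hypothetical, in $1,000s (negative = expenditure).
CASING_COST = -150   # fixed cost of setting protective casing
KICK_COST = -400     # cost of handling a kick without casing

def ev_no_casing(p_kick):
    return p_kick * KICK_COST   # expected cost of skipping casing

def ev_casing(p_kick):
    return CASING_COST          # casing removes exposure to the kick cost

for i in range(11):
    p = i / 10
    best = "casing" if ev_casing(p) > ev_no_casing(p) else "no casing"
    print(f"P(kick)={p:.1f}: EV(no casing)={ev_no_casing(p):7.1f} -> {best}")
```

Plotting the two expected values against P(kick) gives the segmented linear graph described above; the decision switches at the breakeven probability, here 150/400 = 0.375.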

Analysis and Presentation of Results

Presentation is everything—an overstatement, perhaps, but worth considering. People good at probabilistic analysis face their greatest challenge when presenting results to managers who are not well versed in statistics or analysis techniques but responsible for making decisions based on limited information. Our job is to convey the essential information effectively, which requires finesse, discretion, and focus. Imagine a network newscast, in which time is severely limited and the audience may easily lose interest. Good model design and analysis deserve the best presentation possible. Just recall how a student described an ineffective professor as one who "really knew the material but just didn't communicate with us."

An effective written report should be, at most, three pages long. An oral report should be less than 30 minutes. We list the essential ingredients.

  • State the problem succinctly.
  • Describe the model briefly, noting any unusual assumptions or model features.
  • Show key results, using histograms and cumulative distribution functions (single cell) and probabilistic time series (called trend charts or summary graphs, for production forecasts and cash flows).
  • Display a sensitivity chart with at most 10 or 12 inputs for each important output; consider showing a crossplot of output vs. key input to help explain sensitivity.
  • Use overlays of histograms or cumulative functions to compare alternative plans or solutions.
  • Address correlation among inputs, showing the correlation matrix with a basis for choice of values.
  • Compare probabilistic model results with previous deterministic results for base-case compatibility, and explain any inconsistencies.
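The key results in the list above come directly from simulation output. This minimal sketch (hypothetical lognormal NPV trials, invented parameters) builds the histogram counts, the empirical cumulative curve, and the percentiles managers usually ask for:

```python
import numpy as np

rng = np.random.default_rng(7)
npv = rng.lognormal(mean=2.0, sigma=0.6, size=5000)  # hypothetical NPV trials, $MM

# Histogram (frequency) and empirical cumulative distribution of the output cell.
counts, edges = np.histogram(npv, bins=20)
sorted_npv = np.sort(npv)
cum_prob = np.arange(1, len(sorted_npv) + 1) / len(sorted_npv)

# Percentiles for the "probability vs. value chart" (here P10 means a 10% chance
# of a smaller value; some companies use the opposite convention).
p10, p50, p90 = np.percentile(npv, [10, 50, 90])
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```

The `counts`/`edges` pair feeds the histogram, and `sorted_npv` vs. `cum_prob` feeds the cumulative curve; overlaying two such curves compares alternative plans as suggested above.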


A corporate statistician once told us that he is careful with the language he uses in presentations. Instead of a cumulative distribution or probability density function, he uses phrases like "probability vs. value chart." Think of speaking in a foreign language: use simple terms when possible; save the esoteric language for your specialist colleagues, who might be impressed rather than turned off by it.

Future of Technology for the Next Decade

Near-Term Developments

Future development will take place on several fronts:

  • Corporate policies will encourage both a broader range of uncertainty applications and a more consistent use of them throughout the organization. There will be renewed attempts to systematically integrate risk and decision analysis components within companies, while certain specific applications will become routine, namely capital estimation, production forecasts, and economics.
  • Interfaces to Monte Carlo simulation will be developed, in some form, by engineers for reservoir simulation and by engineers and geoscientists for geostatistics. Meanwhile, IT professionals will remove virtually all speed barriers to simulation by establishing network-based and web-based parallel processing.
  • There will continue to be adoptions and testing of comprehensive Monte Carlo economics-evaluation models.
  • We will see continued evolution of real options and portfolio analysis.


Longer-Term Developments

In some form we will see:

  • Seamless links between model components.
  • Establishment and possible sharing of databases, which will enable model builders to justify input distributions to uncertainty models.
  • Several corporations embracing the method.
  • Creation of user groups for Monte Carlo software.
  • Lookbacks, through which companies will formalize this process.
  • Steady growth of technical papers.


Knowledge management, an evolving discipline, will play a role in uncertainty analysis. At the core of knowledge management is a belief that data should be used to its fullest, involvement should be extensive, and processes should be improved over time. The objectives of knowledge management are consistent with those of risk analysis and decision making.

Corporate Policies

One major stumbling block to successful corporate implementation of probabilistic methods has been lack of commitment from the top executives. Only when decision makers demand that estimates of cost, value, and time be presented using the language of uncertainty (i.e., using probability distributions) is there any hope for effective use by the engineers, geoscientists, and planners. Gradually, corporate leaders will be forced to recognize this fact. One force at work is the desire of decision makers to employ the latest tools like real options and portfolio optimization, both of which require an understanding of Monte Carlo simulation and decision trees. Another force is the increased use of large, commercial, full-cycle probabilistic models, such as Asset, Spectrum, PEEP, TERAS, GeoX, PROFIT, and PetroVR, the descendants of the cash-flow models of the 1980s and 1990s. Like their small-scale and versatile counterparts, @RISK and Crystal Ball, these models require adoption of a language of probability and statistics.

The Perfect World

The following would be present in an ideal environment for engineers and scientists:

  • The language of basic statistics would be as common as the language of engineering economics.
  • Appropriate databases would be generated, properly maintained, and used to guide parameter modeling.
  • Everyone would know the significance of his/her analyses and how his/her results plugged into the next level of model.
  • All estimates would be ranges; single-number requests would be refused. We, too often, fool ourselves with single numbers and then either force spending or create reasons for missing the target.
  • Budgets would be built on distributions—aggregation, properly done, results in estimates with relatively less dispersion than the individual components.
  • There would be no penalty for coming in "over budget." Performance measures would be probabilistic.
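The aggregation point above can be illustrated numerically. In this sketch (hypothetical lognormal line items, invented parameters), the coefficient of variation of a summed budget shrinks by roughly 1/√n relative to a single line item, provided the items are independent:

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_trials = 25, 20_000

# Hypothetical budget line items: independent, identically distributed costs.
items = rng.lognormal(mean=0.0, sigma=0.5, size=(n_trials, n_items))

cv_single = items[:, 0].std() / items[:, 0].mean()  # CV of one line item
total = items.sum(axis=1)
cv_total = total.std() / total.mean()               # CV of the aggregated budget

# For independent, identical items, CV of the sum ~ CV of one item / sqrt(n).
print(f"CV single item: {cv_single:.3f}")
print(f"CV of budget:   {cv_total:.3f} (~{cv_single / np.sqrt(n_items):.3f} expected)")
```

Positive correlation among line items would weaken this reduction, which is one reason the earlier discussion insists on addressing correlation among inputs.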


The Role of Tested Modules

Many models serve as templates, which can be used to solve numerous problems with only minor adjustments. Repeated use enhances credibility and increases acceptance. Once a company has built a drilling AFE model, it should be usable for most drilling projects. One of the majors designed a facilities-cost template that was used on every major onshore and offshore project for three years, until it became the victim of a merger. Numerous companies have adopted standardized models for exploration.

This trend should continue indefinitely. Both in-house and commercial models will be adopted and standardized. Competitive forces will lead to off-the-shelf simulation specialty products, currently restricted to large cash-flow applications.

Integration With Other Analysis Tools

Monte Carlo simulation has the potential to be integrated with a wide assortment of other analysis tools. Both developers of the spreadsheet add-in products, Decisioneering Inc. and Palisade Corp., have released toolkits that make the integration relatively simple. A handful of commercial products has already been marketed. Palisade has had, for several years, a version of @RISK for Microsoft Project, and the large-scale project-scheduling software Primavera has had a probabilistic version since the mid-1990s.

In the late 1990s, classical optimization (linear and quadratic programming, for example) was blended with Monte Carlo simulation to yield spreadsheet add-ins called RISKOptimizer and OptQuest. Classic forecasting tools have also been enhanced with Monte Carlo simulation in Forecast Pro. Decision trees can be linked to Monte Carlo simulation.

Already mentioned in this section are the popular cash-flow programs, all of which came with probabilistic (Monte Carlo) options in the late 1990s. The large commercial cash-flow models often combine two or more components, such as databases and simulation, simulation and decision trees, and optimization and simulation.

What Can and Is Being Done?

So where does this lead in the future? What is left to integrate? High on the list would be Monte Carlo simulation with both reservoir simulation and geostatistics. At a more modest level, there is room for improvement in how reserves are linked to production forecasts, how AFE and facilities cost estimates are linked to cash flow, and how operating expenses are linked to capital and production.

Reservoir Simulation and Geostatistics. Imagine a numerical reservoir simulator that could execute hundreds or thousands of times while varying key inputs, such as permeability, porosity, hydrocarbon saturations, and relative permeability, as well as timing of wells and alternative development plans over reasonable ranges of values, generating probabilistic production forecasts.

Imagine geostatistics models that would generate distributions of reservoir characteristics, which, in turn, would serve as inputs to Monte Carlo reserves and production forecast models.
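The spirit of such probabilistic forecasts can be sketched at a much smaller scale with the chapter's volumetric formula. The fragment below (hypothetical input distributions, invented parameters) varies key inputs over reasonable ranges and produces a distribution of recoverable oil rather than a single number:

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 10_000

# Hypothetical input distributions; symbols follow the chapter's nomenclature.
A   = rng.triangular(400, 640, 900, trials)     # area, acres
h   = rng.triangular(15, 25, 40, trials)        # net pay, ft
phi = rng.normal(0.22, 0.02, trials)            # porosity, fraction
So  = rng.normal(0.70, 0.05, trials)            # oil saturation, fraction
Bo  = rng.normal(1.25, 0.05, trials)            # oil FVF, RB/STB
E   = rng.triangular(0.10, 0.18, 0.30, trials)  # recovery efficiency, fraction

# Volumetric oil in place (STB) and recoverable reserves, one value per trial.
N_oip = 7758.0 * A * h * phi * So / Bo
reserves = N_oip * E

p10, p50, p90 = np.percentile(reserves, [10, 50, 90])
print(f"Reserves P10/P50/P90 (MMSTB): {p10/1e6:.1f} / {p50/1e6:.1f} / {p90/1e6:.1f}")
```

A full reservoir-simulation or geostatistical workflow replaces these simple distributions with model runs, but the output, a probabilistic forecast, has the same form.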

Faster Processing. In 2001, Decisioneering and Palisade released Crystal Ball Turbo and RiskAccelerator, respectively, to take advantage of parallel processing by several computers distributed over a network, reducing run time by an order of magnitude and, thus, overcoming one of the objections to simulation on large problems. They are not alone: one company has developed a method of using excess capacity on numerous computers through the Internet by parceling out small tasks and reassembling the results.

Monte Carlo Simulation as a Toolkit. In addition to stand-alone Monte Carlo software based in spreadsheets, the vendors offer "toolkits" that can add a layer of probability and statistics to a variety of other programs. Microsoft Project (project-scheduling software), for example, can be modified to allow any duration, labor rate, or material price to be a probability distribution rather than a fixed value. When first introduced, @RISK for Project required the use of Excel. Soon, however, one could simply work in Project.
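The effect of layering probability onto a schedule can be sketched without any particular product. In this fragment (hypothetical serial drilling tasks with invented triangular durations), replacing fixed durations with distributions shows why the sum of most-likely values understates the likely total for right-skewed tasks:

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 10_000

# Hypothetical serial schedule: (low, most likely, high) durations in days.
tasks = {
    "mobilize":     (3, 5, 9),
    "surface hole": (4, 6, 12),
    "intermediate": (8, 12, 25),
    "completion":   (5, 8, 15),
}

total = np.zeros(trials)
for low, mode, high in tasks.values():
    total += rng.triangular(low, mode, high, trials)

# Deterministic most-likely estimate vs. the probabilistic view of total duration.
mode_sum = sum(mode for _, mode, _ in tasks.values())
print(f"Sum of most-likely durations: {mode_sum} days")
print(f"P50 = {np.percentile(total, 50):.1f} days, P90 = {np.percentile(total, 90):.1f} days")
```

Because each task's distribution is skewed toward overruns, even the P50 of the total exceeds the sum of the most-likely durations, a result deterministic scheduling hides.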

A drilling planning model developed in Australia by CSIRO uses @RISK to address uncertainty in penetration rates and material costs. One commercial cash-flow model used @RISK to handle uncertainty. Since about 1997, both Crystal Ball and @RISK have been linked to several cash-flow models.

Lookbacks. Everyone wants to do lookbacks. Few companies meet their goals. In this context, a lookback is a review of what really happened following a probabilistic analysis that predicted, within some limits, what would happen. Was the well a success? If not, why not? How much did it cost and how long did it take, and why did actual costs or time fall outside a 90% confidence interval? Was there bias in our estimates?

A good model should suggest what kind of data one should collect to refine the model. Over time, review of what actually happened should reveal biases and other weaknesses in the model, the inputs, or the assumptions. In the late 1990s, only a few papers, such as Otis,[26] addressed this question; many more studies must be presented to ensure the results are more than anecdotal. There is an interesting parallel in the publication of statistical results: only the statistically significant results are published. What about all the other research in which the expected correlations were not found? A similar message comes from many companies: each does certain types of estimating fairly well and one or more other types very poorly (i.e., missing ranges and/or often biased). Another story concerns a company that told its drilling engineers that, from then on, their single-point cost estimates should be unbiased: they would be expected to come in over budget half the time.
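A lookback of this kind reduces to a simple coverage check. The sketch below (invented forecast/actual triples, purely illustrative) counts how often actual outcomes landed inside each forecast's 90% interval:

```python
# Lookback sketch: did actuals land inside each forecast's 90% interval (P5 to P95)?
# Hypothetical (forecast_p5, forecast_p95, actual) triples for well-cost estimates, $MM.
records = [
    (8.0, 14.0, 13.1), (6.5, 11.0, 12.2), (9.0, 16.0, 10.4),
    (7.0, 12.5, 12.9), (5.5, 10.0, 8.1),  (10.0, 18.0, 19.5),
]

inside = sum(1 for p5, p95, actual in records if p5 <= actual <= p95)
coverage = inside / len(records)
print(f"Coverage: {coverage:.0%} (well-calibrated estimates should hit about 90%)")

# A coverage far below 90%, or actuals consistently above the forecast medians,
# would signal ranges that are too narrow or a systematic bias.
```

Coverage well below the nominal 90% is exactly the kind of evidence, gathered over many projects, that turns anecdotes about estimating performance into a measurable bias.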

Databases. A companion tool to lookbacks, databases are the sources of the parameters used as inputs to Monte Carlo simulation, but they need improvement. Drilling engineers acquire a tremendous amount of data via the morning report. Much of this information could be organized and used to refine cost-estimate models. For example, historically, there has been some reluctance to identify "problem time." In time, in-house databases will be recognized as opportunities to support choices of input distributions and illustrate scenarios, while, perhaps led by the large cash-flow models, commercial databases will be marketed.

Nomenclature


A = area, acres; one or a set of mutually exclusive and exhaustive Bayesian-type events
B = another one or a set of mutually exclusive and exhaustive Bayesian-type events, various units
Bg = gas formation volume factor, dimensionless ratio
Bo = oil formation volume factor, dimensionless ratio
C = confidence interval, %; gas content in formula for coalbed methane, scf/acre-ft
CV = coefficient of variation (ratio of standard deviation to mean), dimensionless ratio
di = difference between an actual and predicted value, various units
E = recovery efficiency, %
f = probability density function, various units
f(x) = a probability density function, the derivative of F(x), various units
F(x) = a cumulative distribution function, the integral of f(x), various units
G = gas in place, Mcf (10³ ft³), MMcf (10⁶ ft³), or Bcf (10⁹ ft³)
h = net pay, ft
hi = height of histogram, various units
i = running index to distinguish among values of a random variable, dimensionless
IA = mean investment for prospect A, currency
IB = mean investment for prospect B, currency
m = mean of a symmetric data set, various units
M = median (P50) value, various units
n = number of independent, identical random variables X, dimensionless
N = volumetric oil-in-place, bbl; number of data points, dimensionless; maximum value of a running index, dimensionless
pA = chance of success for prospect A, dimensionless
pB = chance of success for prospect B, dimensionless
P = probability of an event, dimensionless
Px = percentile, %
q = annual production, vol/yr
qi = annual production of the first year, vol/yr
qn = annual production of the nth year, vol/yr
r = correlation coefficient, dimensionless
rr = rank correlation coefficient, dimensionless
R = large constant, various units
S = success event, as in P(S), the probability of success
So = oil saturation, dimensionless ratio
Sw = water saturation, dimensionless ratio
U = utility function based on NPV, currency
V = abbreviation for NPV, currency
Vb = bulk rock volume, vol
x = random variable whose values are being observed, various units
xi = ith of N observed values of a random variable, various units
X = a random variable, various units
Xi = a Monte Carlo simulation input, various units
Xn = the final member of a set of Monte Carlo inputs, various units
yi = height of fitted curve, various units
Y = a cumulative distribution function, also called F(x); a Monte Carlo simulation output, various units
Z = the mean of n independent, identical random variables, various units
βi = sensitivity: fractional change in σ of a Y for a full σ change in Xi, dimensionless ratio
μ = mean, various units
μA = mean of prospect A, various units
μB = mean of prospect B, various units
σ = standard deviation, various units
σA = standard deviation of prospect A, various units
σB = standard deviation of prospect B, various units
φ = porosity, dimensionless ratio

Superscripts


a = annual percentage decline rate, %/yr
t = time, years

References


  1. Tukey, J.W. 1977. Exploratory Data Analysis. Boston, Massachusetts: Addison-Wesley.
  2. Tufte, E.R. 1983. The Visual Display of Quantitative Information, 2nd edition. Cheshire, Connecticut: Graphics Press.
  3. Hertz, D.B. 1964. Risk Analysis in Capital Investments. Harvard Business Review 95 (1).
  4. Newendorp, P. 1975. Decision Analysis for Petroleum Exploration, 1st edition. Tulsa, Oklahoma: PennWell Corp.
  5. McCray, A.W. 1975. Petroleum Evaluations and Economic Decisions. Englewood Cliffs, New Jersey: Prentice-Hall Inc.
  6. Megill, R.E. 1977. An Introduction to Risk Analysis. Tulsa, Oklahoma: Petroleum Publishing Co.
  7. Garvey, P.R. 1999. Probability Methods for Cost Uncertainty Analysis. New York City: Marcel Dekker.
  8. Bradley, H.B. 1987. Petroleum Engineering Handbook. Richardson, Texas: SPE.
  9. Walstrom, J.E., Mueller, T.D., and McFarlane, R.C. 1967. Evaluating Uncertainty in Engineering Calculations. J Pet Technol 19 (12): 1595-1603. http://dx.doi.org/10.2118/1928-PA.
  10. Smith, M.B. 1968. Estimate Reserves by Using Computer Simulation Method. Oil & Gas J (March 1968): 81.
  11. Smith, M.B. 1970. Probability Models for Petroleum Investment Decisions. J Pet Technol 22 (5): 543-550. http://dx.doi.org/10.2118/2587-PA.
  12. Smith, M.B. 1974. Probability Estimates for Petroleum Drilling Decisions. J Pet Technol 26 (6): 687-695. SPE-4617-PA. http://dx.doi.org/10.2118/4617-PA.
  13. Bayes, M. and Price, M. 1763. An Essay towards Solving a Problem in the Doctrine of Chances. By the Late Rev. Mr. Bayes, F. R. S. Communicated by Mr. Price, in a Letter to John Canton, A. M. F. R. S. Philosophical Transactions (1683-1775) 53: 370-418. http://dx.doi.org/10.2307/105741.
  14. Murtha, J.A. 2001. Risk Analysis for the Oil Industry. Houston, Texas: Hart Energy. http://www.jmurtha.com/downloads/riskanalysisClean.pdf
  15. Clemen, R.T. and Reilly, T. 2004. Making Hard Decisions with Decision Tools Suite. Boston, Massachusetts: Duxbury Press.
  16. Murtha, J.A. 1994. Incorporating Historical Data Into Monte Carlo Simulation. SPE Comp App 6 (2): 11-17. SPE-26245-PA. http://dx.doi.org/10.2118/26245-PA.
  17. Halton, J.H. 1970. A Retrospective and Prospective Survey of the Monte Carlo Method. SIAM Rev. 12 (1): 1-63.
  18. Caldwell, R.H. and Heather, D.I. 1991. How To Evaluate Hard-To-Evaluate Reserves (includes associated papers 23545 and 23553 ). J Pet Technol 43 (8): 998-1003. SPE-22025-PA. http://dx.doi.org/10.2118/22025-PA.
  19. Murtha, J.A. and Peterson, S.K. 2001. Another Look at Layered Prospects. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 30 September-3 October 2001. SPE-71416-MS. http://dx.doi.org/10.2118/71416-MS.
  20. Murtha, J.A. 1996. Estimating Reserves and Success for a Prospect With Geologically Dependent Layers. SPE Res Eng 11 (1): 37-42. SPE-30040-PA. http://dx.doi.org/10.2118/30040-PA.
  21. Delfiner, P. 2000. Modeling Dependencies Between Geologic Risks in Multiple Targets. Presented at the SPE Annual Technical Conference and Exhibition, Dallas, Texas, 1-4 October 2000. SPE-63200-MS. http://dx.doi.org/10.2118/63200-MS.
  22. Stabell, C.B. 2000. Alternative Approaches to Modeling Risks in Prospects with Dependent Layers. Presented at the SPE Annual Technical Conference and Exhibition, Dallas, Texas, 1-4 October 2000. SPE-63204-MS. http://dx.doi.org/10.2118/63204-MS.
  23. Murtha, J.A. 1997. Monte Carlo Simulation: Its Status and Future. J Pet Technol 49 (4): 361–370. SPE-37932-MS. http://dx.doi.org/10.2118/37932-MS.
  24. Harbaugh, J., Davis, J., and Wendebourg, J. 1995. Computing Risk for Oil Prospects. New York City: Pergamon Press.
  25. Murtha, J.A. 2001. Using Pseudocases to Interpret P10 for Reserves, NPV, and Production Forecasts. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 2-3 April 2001. SPE-71789-MS. http://dx.doi.org/10.2118/71789-MS.
  26. Otis, R.M. and Schneidermann, N. 1997. A process for evaluating exploration prospects. AAPG Bull. 81 (7): 1087–1109.

General References


Abrahamsen, P., Hauge, R., Heggland, K. et al. 1998. Uncertain Cap Rock Geometry, Spill Point, and Gross Rock Volume. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27-30 September 1998. SPE-49286-MS. http://dx.doi.org/10.2118/49286-MS.

Aitchison, J. and Brown, J.A.C. 1957. The Lognormal Distribution. London: Cambridge University Press.

Alexander, J.A. and Lohr, J.R. 1998. Risk Analysis: Lessons Learned. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27-30 September 1998. SPE-49030-MS. http://dx.doi.org/10.2118/49030-MS.

Baker, R.A. 1988. When is a Prospect or Play Played Out? Oil Gas J. 86 (2): 77-80.

Ball Jr., B.C. and Savage, S.L. 1999. Holistic vs. Hole-istic E&P Strategies. J Pet Technol 51 (9): 74-84. SPE-57701-MS. http://dx.doi.org/10.2118/57701-MS.

Bazerman, M.H. 1990. Judgment in Managerial Decision Making, second edition. New York City: John Wiley & Sons.

Behrenbruch, P., Azinger, K.L., and Foley, M.V. 1989. Uncertainty and Risk in Petroleum Exploration and Development: The Expectation Curve Method. Presented at the SPE Asia-Pacific Conference, Sydney, Australia, 13-15 September 1989. SPE-19475-MS. http://dx.doi.org/10.2118/19475-MS.

Behrenbruch, P., Turner, G.J., and Backhouse, A.R. 1985. Probabilistic Hydrocarbon Reserves Estimation: A Novel Monte Carlo Approach. Presented at the Offshore Europe, Aberdeen, United Kingdom, 10-13 September 1985. SPE-13982-MS. http://dx.doi.org/10.2118/13982-MS.

Billiter, T.C. and Dandona, A.K. 1998. Breaking of a Paradigm: The Simultaneous Production of the Gascap and Oil Column. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27-30 September 1998. SPE-49083-MS. http://dx.doi.org/10.2118/49083-MS.

Blehaut, J.-F. 1991. The Assessment of Geological Uncertainties in Development Project Planning. Presented at the SPE Asia-Pacific Conference, Perth. Australia, 4-7 November 1991. SPE-22953-MS. http://dx.doi.org/10.2118/22953-MS.

Bourdaire, J.M., Byramjee, R.J., and Pattinson, R. 1985. Reserve assessment under uncertainty --a new approach. Oil Gas J. 83 (23): 135-140.

Box, R.A. 1990. Math Method Aids Exploration Risk Analysis. Oil & Gas J. (9 July 1990).

Caldwell, R.H. and Heather, D.I. 1991. How To Evaluate Hard-To-Evaluate Reserves (includes associated papers 23545 and 23553 ). J Pet Technol 43 (8): 998-1003. SPE-22025-PA. http://dx.doi.org/10.2118/22025-PA

Capen, E.C. 1993. A Consistent Probabilistic Approach to Reserves Estimates. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 29-30 March 1993. SPE-25830-MS. http://dx.doi.org/10.2118/25830-MS

Carter, P.J. and Morales, E. 1998. Probabilistic Addition of Gas Reserves Within a Major Gas Project. Presented at the SPE Asia Pacific Oil and Gas Conference and Exhibition, Perth, Australia, 12-14 October 1998. SPE-50113-MS. http://dx.doi.org/10.2118/50113-MS

Chewaroungroaj, J., Varela, O.J., and Lake, L.W. 2000. An Evaluation of Procedures to Estimate Uncertainty in Hydrocarbon Recovery Predictions. Presented at the SPE Asia Pacific Conference on Integrated Modelling for Asset Management, Yokohama, Japan, 25-26 April 2000. SPE-59449-MS. http://dx.doi.org/10.2118/59449-MS

Claeys, J. and Walkup, G. Jr. 1999. Discovering Real Options in Oilfield Exploration and Development. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 21-23 March 1999. SPE-52956-MS. http://dx.doi.org/10.2118/52956-MS

Clemen, R.T. and Reilly, T. 2001. Making Hard Decisions with Decision Tools Suite, second edition. Pacific Grove, California: Duxbury Press.

Cronquist, C. 1991. Reserves and Probabilities: Synergism or Anachronism? J Pet Technol 43 (10): 1258-1264. SPE-23586-PA. http://dx.doi.org/10.2118/23586-PA

Damsleth, E., Hage, A., and Volden, R. 1992. Maximum Information at Minimum Cost: A North Sea Field Development Study Using Experimental Design. J Pet Technol 44 (12): 1350–1356. SPE-23139-PA. http://dx.doi.org/10.2118/23139-PA.

Davidson, L.B. and Davis, J.E. 1995. Simple, Effective Models for Evaluating Portfolios of Exploration and Production Projects. Presented at the SPE Annual Technical Conference and Exhibition, Dallas, Texas, 22-25 October 1995. SPE-30669-MS. http://dx.doi.org/10.2118/30669-MS

Davies, G.G., Whiteside, M.W., and Young, M.S. 1991. An Integrated Approach to Prospect Evaluation. Presented at the Offshore Europe, Aberdeen, United Kingdom, 3-6 September 1991. SPE-23157-MS. http://dx.doi.org/10.2118/23157-MS

Dejean, J.-P. and Blanc, G. 1999. Managing Uncertainties on Production Predictions Using Integrated Statistical Methods. Presented at the SPE Annual Technical Conference and Exhibition, Houston, Texas, 3–6 October. SPE-56696-MS. http://dx.doi.org/10.2118/56696-MS

Dhir, R., Jr., R.R.D., and Mavor, M.J. 1991. Economic and Reserve Evaluation of Coalbed Methane Reservoirs. J Pet Technol 43 (12): 1424-1431, 1518. SPE-22024-PA. http://dx.doi.org/10.2118/22024-PA.

Drew, L.J. 1990. Oil and Gas Forecasting—Reflections of a Petroleum Geologist. New York City: Oxford University Press.

Fassihi, M.R., Blinten, J.S., and Riis, T. 1999. Risk Management for the Development of an Offshore Prospect. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 21-23 March 1999. SPE-52975-MS. http://dx.doi.org/10.2118/52975-MS.

Feller, W. 1968. An Introduction to Probability Theory and its Applications, Vol. I, third edition. New York City: John Wiley.

Feller, W. 1966. An Introduction to Probability Theory and its Applications, Vol. II, third edition. New York City: John Wiley.

Galli, A., Armstrong, M., and Jehl, B. 1999. Comparing Three Methods for Evaluating Oil Projects: Option Pricing, Decision Trees, and Monte Carlo Simulations. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 21-23 March 1999. SPE-52949-MS. http://dx.doi.org/10.2118/52949-MS

Gatta, S.R. 1999. Decision Tree Analysis and Risk Modeling To Appraise Investments on Major Oil Field Projects. Presented at the Middle East Oil Show and Conference, Bahrain, 20-23 February 1999. SPE-53163-MS. http://dx.doi.org/10.2118/53163-MS

Gilman, J.R., Brickey, R.T., and Red, M.M. 1998. Monte Carlo Techniques for Evaluating Producing Properties. Presented at the SPE Rocky Mountain Regional/Low-Permeability Reservoirs Symposium, Denver, Colorado, 5-8 April 1998. SPE-39926-MS. http://dx.doi.org/10.2118/39926-MS

Grace, J.D., Caldwell, R.H., and Heather, D.I. 1993. Comparative Reserves Definitions: U.S.A., Europe, and the Former Soviet Union (includes associated paper 28020 ). J Pet Technol 45 (9): 866-872. SPE-25828-PA. http://dx.doi.org/10.2118/25828-PA

Gutleber, D.S., Heiberger, E.M., and Morris, T.D. 1995. Simulation Analysis for Integrated Evaluation of Technical and Commercial Risk. J Pet Technol 47 (12): 1062-1067. SPE-30670-PA. http://dx.doi.org/10.2118/30670-PA

Harbaugh, J., Davis, J., and Wendebourg, J. 1995. Computing Risk for Oil Prospects. Oxford: Pergamon.

Harrell, J.A. 1987. The Analysis of Bivariate Association. In Use and Abuse of Statistical Methods in the Earth Sciences, ed. W.B. Size. Oxford: Oxford University Press.

Hefner, J.M. and Thompson, R.S. 1996. A Comparison of Probabilistic and Deterministic Reserve Estimates: A Case Study. SPE Res Eng 11 (1): 43–47. SPE-26388-PA. http://dx.doi.org/10.2118/26388-PA

Hegdal, T., Dixon, R.T., and Martinsen, R. 2000. Production Forecasting of an Unstable Compacting Chalk Field Using Uncertainty Analysis. SPE Res Eval & Eng 3 (3): 189-196. SPE-64296-PA. http://dx.doi.org/10.2118/64296-PA

Hertz, D.B. 1964. Risk Analysis in Capital Investments. Harvard Business Review 95 (1): 95–106.

Higgins, J.G. 1993. Planning for Risk and Uncertainty in Oil Exploration. Long Range Planning 26 (1): 111–122.

Hillestad and Goode, P. 1989. Reserves Determination—Implications in the Business World. APEA J 29: 52.

Hogg, R.V. and Craig, A.T. 1965. Introduction to Mathematical Statistics, second edition. New York City: Macmillan.

Holtz, M.H. 1993. Estimating Oil Reserve Variability by Combining Geologic and Engineering Parameters. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 29-30 March 1993. SPE-25827-MS. http://dx.doi.org/10.2118/25827-MS

Howard, R.A. 1988. Decision Analysis: Practice and Promise. Management Science 34 (6): 679-695.

Irrgang, R., Damski, C., Kravis, S. et al. 1999. A Case-Based System to Cut Drilling Costs. Presented at the SPE Annual Technical Conference and Exhibition, Houston, Texas, 3-6 October 1999. SPE-56504-MS. http://dx.doi.org/10.2118/56504-MS

Jensen, T.B. 1998. Estimation of Production Forecast Uncertainty for a Mature Production License. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27–30 September. SPE-49091-MS. http://dx.doi.org/10.2118/49091-MS

Jochen, V.A. and Spivey, J.P. 1996. Probabilistic Reserves Estimation Using Decline Curve Analysis with the Bootstrap Method. Presented at the SPE Annual Technical Conference and Exhibition, Denver, 6–9 October. SPE 36633. http://dx.doi.org/10.2118/36633-MS

Joshi, S., Castanier, L.M., and Brigham, W.E. 1998. Techno-Economic and Risk Evaluation of an EOR Project. Presented at the SPE India Oil and Gas Conference and Exhibition, New Delhi, India, 17-19 February 1998. SPE-39578-MS. http://dx.doi.org/10.2118/39578-MS

Karra, S., Egbogah, E.O., and Yang, F.W. 1995. Stochastic and Deterministic Reserves Estimation in Uncertain Environments. Presented at the SPE Asia Pacific Oil and Gas Conference, Kuala Lumpur, Malaysia, 20-22 March 1995. SPE-29286-MS. http://dx.doi.org/10.2118/29286-MS

Keith, D.R., Wilson, D.C., and Gorsuch, D.P. 1986. Reserve Definitions - An Attempt at Consistency. Presented at the European Petroleum Conference, London, United Kingdom, 20-22 October 1986. SPE-15865-MS. http://dx.doi.org/10.2118/15865-MS.

Kitchel, B.G., Moore, S.O., Banks, W.H. et al. 1997. Probabilistic Drilling-Cost Estimating. SPE Comp App 12 (4): 121–125. SPE-35990-PA. http://dx.doi.org/10.2118/35990-PA

Kokolis, G.P., Litvak, B.L., Rapp, W.J. et al. 1999. Scenario Selection for Valuation of Multiple Prospect Opportunities: A Monte Carlo Play Simulation Approach. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 21-23 March 1999. SPE-52977-MS. http://dx.doi.org/10.2118/52977-MS

Lia, O., Omre, H., Tjelmel, H. et al. 1997. Uncertainties in Reservoir Production Forecasts. AAPG Bull. 81 (5): 775-802.

Linjordet, A., Nielsen, P.E., and Siring, E. 1997. Heterogeneities Modeling and Uncertainty Quantification of the Gullfaks Sor Brent Formation In-Place Hydrocarbon Volumes. SPE Form Eval 12 (3): 202-207. SPE-35497-PA. http://dx.doi.org/10.2118/35497-PA

Macary, S.M., Hassan, A., and Ragaee, E. 1999. Better Understanding of Reservoir Statistics is the Key for Reliable Monte Carlo Simulation. Presented at the Middle East Oil Show and Conference, Bahrain, 20-23 February 1999. SPE-53264-MS. http://dx.doi.org/10.2118/53264-MS

MacKay, J.A. 1995. Utilizing Risk Tolerance To Optimize Working Interest. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 26-28 March 1995. SPE-30043-MS. http://dx.doi.org/10.2118/30043-MS.

Maharaj, U.S. 1996. Risk Analysis Of Tarsands Exploitation Projects in Trinidad. Presented at the SPE Latin America/Caribbean Petroleum Engineering Conference, Port-of-Spain, Trinidad, 23-26 April 1996. SPE-36124-MS. http://dx.doi.org/10.2118/36124-MS

Martinsen, R., Kjelstadli, R.M., Ross, C. et al. 1997. The Valhall Waterflood Evaluation: A Decision Analysis Case Study. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38926-MS. http://dx.doi.org/10.2118/38926-MS

Mata, T., Rojas, L., Banerjee, S. et al. 1997. Probabilistic Reserves Estimation of Mara West Field, Maracaibo Basin, Venezuela: Case Study. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38805-MS. http://dx.doi.org/10.2118/38805-MS

McCray, A.W. 1975. Petroleum Evaluations and Economic Decisions. Englewood Cliffs, New Jersey: Prentice-Hall Inc.

McLellan, P.J. and Hawkes, C.D. 1998. Application of Probabilistic Techniques for Assessing Sand Production and Borehole Instability Risks. Presented at the SPE/ISRM Rock Mechanics in Petroleum Engineering Conference, Trondheim, Norway, 8-10 July 1998. SPE-47334-MS. http://dx.doi.org/10.2118/47334-MS

McNutt, P.B., Pittaway, K.R., Rosato, R.J. et al. 1994. A Probabilistic Forecasting Method for the Huntley CO2 Projects. Presented at the SPE/DOE Improved Oil Recovery Symposium, Tulsa, Oklahoma, 17-20 April 1994. SPE-27762-MS. http://dx.doi.org/10.2118/27762-MS

Megill, R.E. 1977. An Introduction to Risk Analysis. Tulsa, Oklahoma: Petroleum Publishing Company.

Megill, R.E. Evaluating & Managing Risk— A Collection of Readings. Tulsa, Oklahoma: SciData Publishing.

Mishra, S. 1998. Alternatives to Monte-Carlo Simulation for Probabilistic Reserves Estimation and Production Forecasting. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27-30 September 1998. SPE-49313-MS. http://dx.doi.org/10.2118/49313-MS

Moore, K.S., Cockcroft, P.J., and Prasser, R. 1995. Applications of Risk Analysis in Petroleum Exploration and Production Operations. Presented at the SPE Asia Pacific Oil and Gas Conference, Kuala Lumpur, Malaysia, 20-22 March 1995. SPE-29254-MS. http://dx.doi.org/10.2118/29254-MS

Moore, L.R. and Mudford, B.S. 1999. Probabilistic Play Analysis From Geoscience to Economics: An Example From the Gulf of Mexico. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 21-23 March 1999. SPE-52955-MS. http://dx.doi.org/10.2118/52955-MS

Murtha, J.A. 2002. Sums and Products of Distributions: Rules of Thumb and Applications. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 29 September-2 October 2002. SPE-77422-MS. http://dx.doi.org/10.2118/77422-MS

Murtha, J.A. and Peterson, S.K. 2001. Another Look at Layered Prospects. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 30 September-3 October 2001. SPE-71416-MS. http://dx.doi.org/10.2118/71416-MS

Murtha, J.A. 2001. Risk Analysis for the Oil Industry. Houston, Texas: Hart Energy. http://www.jmurtha.com/downloads/riskanalysisClean.pdf

Murtha, J.A. 2001. Using Pseudocases to Interpret P10 for Reserves, NPV, and Production Forecasts. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 2-3 April 2001. SPE-71789-MS. http://dx.doi.org/10.2118/71789-MS

Murtha, J.A. 1997. Monte Carlo Simulation: Its Status and Future. J Pet Technol 49 (4): 361–370. SPE-37932-MS. http://dx.doi.org/10.2118/37932-MS

Murtha, J.A. 1997. Risk and Decision Analysis Software. SPE Comp App (August).

Murtha, J.A. 1996. Estimating Reserves and Success for a Prospect With Geologically Dependent Layers. SPE Res Eng 11 (1): 37-42. SPE-30040-PA. http://dx.doi.org/10.2118/30040-PA

Murtha, J.A. and Janusz, G.J. 1995. Spreadsheets generate reservoir uncertainty distributions. Oil Gas J. 93 (11): 87-91.

Murtha, J.A. 1994. Incorporating Historical Data Into Monte Carlo Simulation. SPE Comp App 6 (2): 11-17. SPE-26245-PA. http://dx.doi.org/10.2118/26245-PA

Murtha, J.A. 1987. Infill Drilling in the Clinton: Monte Carlo Techniques Applied to the Material Balance Equation. Presented at the SPE Eastern Regional Meeting, Pittsburgh, Pennsylvania, 21-23 October 1987. SPE-17068-MS. http://dx.doi.org/10.2118/17068-MS

Nakayama, K. 2000. Estimation of Reservoir Properties by Monte Carlo Simulation. Presented at the SPE Asia Pacific Conference on Integrated Modelling for Asset Management, Yokohama, Japan, 25-26 April 2000. SPE-59408-MS. http://dx.doi.org/10.2118/59408-MS

Nangea, A.G. and Hunt, E.J. 1997. An Integrated Deterministic/Probabilistic Approach to Reserve Estimation: An Update. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38803-MS. http://dx.doi.org/10.2118/38803-MS

National Petroleum Council. 1984. Enhanced Oil Recovery. Washington, DC: National Petroleum Council.

Newendorp, P.D. and Quick, A.N. 1987. The Need for More Reliable Decision Criteria for Drilling Prospect Analyses. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 2-3 March 1987. SPE-16318-MS. http://dx.doi.org/10.2118/16318-MS

Newendorp, P. 1975. Decision Analysis for Petroleum Exploration. Tulsa, Oklahoma: PennWell.

Ovreberg, O., Damsleth, E., and Haldorsen, H.H. 1992. Putting Error Bars on Reservoir Engineering Forecasts. J Pet Technol 44 (6): 732-738. SPE-20512-PA. http://dx.doi.org/10.2118/20512-PA

Padua, K.G.O. 1998. Probabilistic Performance of Non-Conventional Wells in Deep Water and Amazon Jungle Fields in Brazil. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27-30 September 1998. SPE-49036-MS. http://dx.doi.org/10.2118/49036-MS

Patricelli, J.A. and McMichael, C.L. 1995. An Integrated Deterministic/Probabilistic Approach to Reserve Estimations. J Pet Technol 47 (1): 49–53. SPE-28329-PA. http://dx.doi.org/10.2118/28329-PA

Peterson, S.K., Murtha, J.A., and Roberts, R.W. 1995. Drilling Performance Predictions: Case Studies Illustrating the Use of Risk Analysis. Presented at the SPE/IADC Drilling Conference, Amsterdam, The Netherlands, 28 February-2 March 1995. SPE-29364-MS. http://dx.doi.org/10.2118/29364-MS

Peterson, S.K., Murtha, J.A., and Schneider, F.F. 1993. Risk Analysis and Monte Carlo Simulation Applied to the Generation of Drilling AFE Estimates. Presented at the SPE Annual Technical Conference and Exhibition, Houston, Texas, 3-6 October 1993. SPE-26339-MS. http://dx.doi.org/10.2118/26339-MS

Purvis, D.C., Strickland, R.F., Alexander, R.A. et al. 1997. Coupling Probabilistic Methods and Finite Difference Simulation: Three Case Histories. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38777-MS. http://dx.doi.org/10.2118/38777-MS

Quick, A.N. and Buck, N.A. 1983. Strategic Planning for Exploration Management. Boston, Massachusetts: IHRDC.

Ross, J.G. 1994. Discussion of Comparative Reserves Definitions: U.S.A., Europe, and the Former Soviet Union. J Pet Technol 46 (8): 713.

Santos, R. and Ehrl, E. 1995. Combined methods improve reserve estimates. Oil Gas J. 93 (18): 112-118.

Schuyler, J.R. 1998. Probabilistic Reserves Lead to More Accurate Assessments. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27-30 September 1998. SPE-49032-MS. http://dx.doi.org/10.2118/49032-MS

Serpen, U., Alpkaya, E.N., and Ozkan, E. 1998. Preliminary Investigation of Coalbed Methane Potential of the Zonguldak Basin in Turkey. Presented at the SPE Gas Technology Symposium, Calgary, Alberta, Canada, 15-18 March 1998. SPE-39985-MS. http://dx.doi.org/10.2118/39985-MS

Shivers III, R.M. and Domangue, R.J. 1993. Operational Decision Making for Stuck-Pipe Incidents in the Gulf of Mexico: A Risk Economics Approach. SPE Drill & Compl 8 (2): 125-130. SPE-21998-PA. http://dx.doi.org/10.2118/21998-PA

Siring, E. 1994. A System for Estimating Uncertainties in Hydrocarbon Pore Volume. Presented at the European Petroleum Computer Conference, Aberdeen, United Kingdom, 15-17 March 1994. SPE-27568-MS. http://dx.doi.org/10.2118/27568-MS

Smith, M.B. 1974. Probability Estimates for Petroleum Drilling Decisions. J Pet Technol 26 (6): 687-695. SPE-4617-PA. http://dx.doi.org/10.2118/4617-PA

Smith, P.J., Hendry, D.J., and Crowther, A.R. 1993. The Quantification and Management of Uncertainty in Reserves. Presented at the SPE Western Regional Meeting, Anchorage, Alaska, 26-28 May 1993. SPE-26056-MS. http://dx.doi.org/10.2118/26056-MS

Spencer, J.A. and Morgan, D.T.K. 1998. Application of Forecasting and Uncertainty Methods to Production. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 27-30 September 1998. SPE-49092-MS. http://dx.doi.org/10.2118/49092-MS

Spouge, J.R. 1991. CRASH: Computerised Prediction of Ship-Platform Collision Risks. Presented at the Offshore Europe, Aberdeen, United Kingdom, 3-6 September 1991. SPE-23154-MS. http://dx.doi.org/10.2118/23154-MS

Van Horn, G.H. 1970. Gas Reserves, Uncertainty, and Investment Decisions. Presented at the Gas Industry Symposium, Omaha, Nebraska, 21-22 May 1970. SPE-2878-MS. http://dx.doi.org/10.2118/2878-MS

Walstrom, J.E., Mueller, T.D., and McFarlane, R.C. 1967. Evaluating Uncertainty in Engineering Calculations. J Pet Technol 19 (12): 1595-1603. SPE-1928-PA. http://dx.doi.org/10.2118/1928-PA

Wiggins, M.L. and Zhang, X. 1994. Using PC's and Monte Carlo Simulation To Assess Risk in Workover Evaluations. SPE Comp App 6 (3): 19-23. SPE-26243-PA. http://dx.doi.org/10.2118/26243-PA

Wisnie, A.P. and Zhu, Z. 1994. Quantifying Stuck Pipe Risk in Gulf of Mexico Oil and Gas Drilling. Presented at the SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 25-28 September 1994. SPE-28298-MS. http://dx.doi.org/10.2118/28298-MS

Wright, J.D. 1997. Actual Performance Compared to a 20 Year Old Probabilistic Reserve Estimate. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38802-MS. http://dx.doi.org/10.2118/38802-MS

Zhang, D., Li, L., and Tchelepi, H.A. 1999. Stochastic Formulation for Uncertainty Assessment of Two-Phase Flow in Heterogeneous Reservoirs. Presented at the SPE Reservoir Simulation Symposium, Houston, Texas, 14-17 February 1999. SPE-51930-MS. http://dx.doi.org/10.2118/51930-MS

Glossary


American Option - a form of financial option that allows the purchase on or before the exercise date.

Anderson-Darling - one of three common goodness-of-fit metrics used when fitting probability distributions to data (the others being the chi-square and Kolmogorov-Smirnov), which places additional emphasis on the fit at the extreme values of the distribution.

Average - another word for mean or arithmetic mean, obtained by summing N numbers and dividing by N.

AVERAGE - the Excel function that returns the average of a row or column of data.

Bayes' Theorem - a two-part theorem relating conditional probability to unconditional (prior) probability, used in value of information problems but also important to acknowledge when estimating probabilities for geologically dependent prospects.
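
As a minimal sketch of the updating step (the probabilities below are hypothetical, chosen only to illustrate the arithmetic for two geologically dependent prospects):

```python
# Hypothetical inputs -- illustrative only.
p_b = 0.20          # prior (unconditional) probability that prospect B is a discovery
p_a_given_b = 0.45  # probability A is a discovery, given that B is one
p_a = 0.30          # unconditional probability that A is a discovery

# Bayes' Theorem: P(B|A) = P(A|B) * P(B) / P(A)
p_b_given_a = p_a_given_b * p_b / p_a

# Success at A raises the estimate for B from 0.20 to 0.30.
print(p_b_given_a)
```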

Bimodal - having two modes, a property ascribed to certain histograms or probability density functions.

Call Option - a form of a financial option which entitles the owner to purchase one share of a commodity at a specific (strike) price on or before a specific (exercise) date.

Cap and cup - nicknames for the Venn-diagram-style symbols for intersection (the "and" operator) and union (the "or" operator), namely ∩ and ∪. These symbols are not used here; the intersection A∩B is written A&B.

Certainty equivalent - a value obtained by trial and error with an individual or a group, toward which he/they would be indifferent between this value and a chance event.

Chi-square - one of three metrics used to judge the goodness of fit between data and a density function, specifically, Σ(1/yi)(yi − fi)², where fi is the class frequency from the histogram, and yi is the value of the density function taken at the class midpoint. Chi-square is also a classical random variable, like a normal or log-normal variable.
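
The sum above can be computed directly; the histogram frequencies and density values below are hypothetical, used only to show the arithmetic:

```python
# Hypothetical class frequencies (f_i) from a histogram and density-function
# values (y_i) at the class midpoints -- illustrative only.
f = [12, 18, 30, 25, 15]
y = [10, 20, 28, 27, 15]

# Chi-square goodness-of-fit statistic: sum of (1/y_i) * (y_i - f_i)^2.
chi_square = sum((yi - fi) ** 2 / yi for fi, yi in zip(f, y))
```

A smaller value indicates a closer fit between the fitted density and the histogram.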

Conditional probability - the revised probability of event B, given that another event, A, occurs, notated P(B|A).

Confidence interval (CI) - the 90% confidence interval for a variable X is the range of values between P5 and P95. Similarly the 80% confidence interval is the range from P10 to P90. Confidence intervals are used as subranges of practical interest, representing where the value of X will fall the vast majority of the time.

Contingency - cost engineers offer the following definition: cost contingency is the amount of additional money, above and beyond the base cost, that is required to ensure the project's success. More generally, contingency is an additional amount set aside for routine cost overruns or for things that were not accounted for. Some companies specify the difference between a budgeted amount and a high percentile such as P85 or P90 as a contingency.

Continuous - one of two types of random variables (the other being discrete) having the property that the domain is a continuous interval on the real line (e.g., normal, triangular, or log-normal).

Counting Technique - a method of estimating probabilities, including conditional probabilities, from empirical data, namely by taking ratios of the number of times a given event occurred with the number of times it could have occurred.

Cumulative distribution function (CDF) - a graph the horizontal axis of which is a variable X and the vertical axis of which ranges from 0 to 1. There are two types: ascending and descending. In an ascending CDF, a point (x, y) indicates that the probability that X is less than or equal to x is y. For a descending CDF, a point (x, y) indicates that the probability that X is greater than or equal to x is y. Any probability density function (PDF) can be integrated to yield the corresponding (ascending) CDF. Thus, the derivative of an ascending CDF is the corresponding PDF for the variable X.

Cumulative distributions - the integral of a density function. The functional relationship between cumulative probability (on the vertical axis) and value.

Decision tree - a pictorial device, consisting of nodes and branches, that describes two or more courses of action and the resulting uncertainties with probabilities of occurrence, as well as possible subsequent actions and uncertainties. The solution to the tree consists of a preferred course of action or path along the tree, together with the resulting expected value.

Descriptive statistics - a collection of numbers, each called a statistic, such as mean, median, standard deviation, and skewness, that describes a set of data; also the process of calculating these numbers.

Deterministic model - a model for which every input variable, and hence each output variable as well, is given exactly one value, in contrast to a probabilistic model.

Discrete - applied to a random variable, having either a finite or a countably infinite range of possible values, such as the binomial or Poisson variables.

Discretization - converting a continuous distribution to a discrete distribution by subdividing the X -range. The discrete variable has values equal to the means of the subranges and probabilities equal to the chance that the variable would fall into that subrange.
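
For a uniform variable the subrange means are simply the midpoints, which allows a compact sketch (four equal-probability bins, illustrative only):

```python
# Discretize a uniform(0, 1) variable into n equal-probability subranges.
n = 4
edges = [i / n for i in range(n + 1)]                        # subrange boundaries
values = [(edges[i] + edges[i + 1]) / 2 for i in range(n)]   # mean of each subrange
probs = [1 / n] * n                                          # chance of each subrange

# The discrete approximation preserves the mean of the original distribution.
discrete_mean = sum(v * p for v, p in zip(values, probs))
```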

European option - a form of financial option that requires that the purchase be made on the exercise date.

Expected value - the probability-weighted average of the possible outcomes of a chance event; for a probability distribution, its mean.

Financial option - one of two basic types (puts and calls) of financial instruments entitling the owner to sell (put) or buy (call) one share of a commodity for a specified price on or before a specified date.

Framing the problem - a process during which a problem is described in sufficient detail for the group involved to agree that the description is unambiguous.

Goodness of fit - a type of metric used to quantify how close a density function approximates a histogram.

Histograms - a column chart based on classes or bins of equal width. The height of the bars indicates either the frequency or the relative frequency associated with the data falling into the given class.

Joint probability - the probability that both of two events, say A and B, will occur, in symbols P(A&B).

Kolmogorov-Smirnov - one of three common goodness-of-fit metrics used when fitting probability distributions to data (the others being the chi-square and Anderson-Darling), which compares the empirical and fitted cumulative distribution functions, using their maximum vertical separation as the measure of fit.

Kurtosis - a statistic that measures the peakedness of a density function. By the convention used here (excess kurtosis), a normal curve has kurtosis of 0.

Linear programming - a form of mathematical programming in which the objective function is a linear combination of the independent variables. The solution technique is called the simplex method because it can be viewed as a search along the edges of the feasible region.

Mean - for data, the statistic obtained by summing N data points then dividing by N. Also called average, arithmetic average, or expected value. For a density function f, the integral of x × f(x).

Measures of central tendency - a group of statistics, most commonly mean, median, and mode but also geometric and harmonic mean, which represent typical values of the data.

Median - for data, a statistic obtained by sorting N data then selecting the middle one, number (N + 1)/2 in the list, if N is odd and the average of the two middle ones, numbers N/2 and N/2 + 1 if N is even. For a density function, the value m for which P(X < m) = 0.5.
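
The odd/even rule translates directly into code (a sketch; the function name is ours):

```python
def median(data):
    """Median as defined above: the middle value for odd N,
    the average of the two middle values for even N."""
    s = sorted(data)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]                      # item (N + 1)/2 in 1-based counting
    return (s[n // 2 - 1] + s[n // 2]) / 2    # items N/2 and N/2 + 1

median([7, 1, 5])      # -> 5
median([8, 2, 6, 4])   # -> 5.0
```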

MEDIAN - the Excel function that returns the median for a row or column of data.

Modal class - in a histogram, the class with the highest frequency.

Mode - for data, the number that occurs with the highest frequency. For a density function, the value for which the frequency function attains its maximum.

MODE - the Excel function that returns the mode of a row or column of data.

Monte Carlo simulation - the principal tool of risk analysis, in which input variables are assumed to be density functions. Hundreds or thousands of trials are executed, sampling from these inputs and evaluating one or more outputs. At the end of the simulation, descriptive statistics are given, and histograms and cumulative functions are exhibited for each output and a sensitivity chart prioritizes the inputs for a given output.
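
A bare-bones sketch of the procedure, in which the input distributions and the volumetric model are hypothetical, standing in for a real reserves or economics model:

```python
import random

random.seed(7)  # reproducible illustration

def one_trial():
    # Sample each input from its assumed distribution (hypothetical ranges).
    area = random.triangular(500, 1500, 900)   # productive area, acres
    net_pay = random.triangular(20, 80, 45)    # net pay, ft
    recovery = random.uniform(100, 300)        # recovery, bbl/acre-ft
    return area * net_pay * recovery           # recoverable oil, bbl

trials = sorted(one_trial() for _ in range(10_000))

# Descriptive statistics of the output distribution.
mean = sum(trials) / len(trials)
p10 = trials[len(trials) // 10]
p90 = trials[9 * len(trials) // 10]
```

Commercial packages add histograms, cumulative curves, and sensitivity charts on top of this same sampling loop.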

Multimodal - having more than one mode. Thus, for data, having two or more values with the highest frequency; for a density function, having two or more relative maxima.

Percentile - value indicating that a corresponding percentage of data or probability is less than or equal to this number, in symbols P10 for the tenth percentile. Thus, P(X <= P10) = 10%.
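
One common convention for computing a percentile from data is sketched below (percentile conventions vary between packages; this is an illustration, not the only definition):

```python
import math

def percentile(data, p):
    """P_p as defined above: the smallest data value such that
    at least p% of the data is less than or equal to it."""
    s = sorted(data)
    k = max(1, math.ceil(len(s) * p / 100))  # 1-based rank
    return s[k - 1]

percentile(range(1, 101), 10)   # -> 10, and P(X <= 10) = 10%
```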

Perfect information - in value of information problems, the ideal case in which the predicting device makes no mistakes. Thus, for a given state of nature, the information always accurately predicts it.

Population - the entire collection of objects under consideration. Often, the population is the underlying theoretical distribution from which a sample is drawn.

Probability density function (PDF) - a graph of a random variable X, the vertical axis of which represents probability density, while its horizontal axis represents the variable. The PDF is the derivative of a cumulative distribution function. Thus, the area beneath the PDF graph to the left of a given value a is P(X <= a); the area under the curve between a and b represents the probability that X falls between a and b; and of course the area under the entire curve totals 1.0. The familiar bell-shaped normal curve is a PDF.

Probability distributions - either a density function or a cumulative distribution (i.e., a functional relationship between a random variable X and its probability).

Put option - a form of a financial option that entitles the owner to sell one share of a commodity at a specific (strike) price on or before a specific (exercise) date.

Random variable - any variable X having an associated probability distribution.

RAROC - acronym for risk adjusted return on capital. One of many metrics pertaining to risk; specifically the ratio of the mean NPV to the difference between the mean and the P5 value.

Real options - the right to take an action at a specified cost for a specified duration. Similar to financial options but with a variety of actions, often having to do with either changing the scope or the timing of the project.

Risk analysis - a general term encompassing Monte Carlo simulation, decision trees, and general uncertainty analysis.

Risk premium - the difference between the certainty equivalent (from a decision tree using a utility function) and the expected value of the original (value laden) tree.

Root decision node - the choice node in a decision tree from which all branches emanate. The node the value of which is calculated when the tree is solved.

Sample - a collection of objects or numbers drawn from a population. Often, a sample is the set of empirical data that we analyze, hoping to infer statistical properties of the population.

Sensitivity analysis - a process to rank the inputs to a model in terms of their impact on a given output.

Spider diagrams - a graph showing the percentage changes in an output of a model corresponding to percentage changes in various inputs. The graph for each input resembles a spider leg, being piecewise linear.

Strike price - the agreed-upon price at which a share of stock can be bought or sold by the holder of an option.

States of nature - in value of information problems, the various possible outcomes one is trying to assess.

Statistics - a collection of values associated with a sample, such as mean or standard deviation; also called descriptive statistics. Also, the formal study of such values.

Stem and leaf diagram - Tukey's method of visualizing data: the stem is an ordered column of the numbers formed by truncating the data by one or two digits, while the leaves are the truncated digits, lined up to the right of their number.

Stockout - a condition in inventory theory in which the commodity level falls to 0, often causing loss of business or stockout cost.

Tornado diagrams - a bar graph ranking the impact of various inputs of a model on a specific output. The classical tornado chart is obtained by fixing all but one input at some base value and letting the one input vary from its minimum to maximum. A similar graph is used in Monte Carlo simulation software, in which the bar widths represent either rank correlation coefficients or stepwise linear regression coefficients.

Types of information - in a value-of-information decision-tree problem, the various forms the information can take, such as "it will rain today" or "the structure appears to be closed."

Uncertainty node - in a decision tree, a node the emanating branches of which represent the various possible outcomes of some event with their values and probabilities of occurrence.

Univariate statistics - see "descriptive statistics."

Utility functions - a relationship that assigns a unique value in a normalized (0,1) range to each possible value of some monetary range, to account for levels of risk aversion.

Value at Risk (VAR) - one of several metrics pertaining to risk. While this is not well defined, it usually represents the difference between some base case (perhaps a budget value) and an almost-worst-case scenario (perhaps P5).

Value of information - a method using decision trees to place a value on the acquisition of some form of predicting information, such as a well test, a pilot study, or a 3D seismic interpretation.

Value of the option - a function of the value of the asset and how it is projected to change, the exercise price, the strike date, and the risk free interest rate.

SI Metric Conversion Factors


acre × 4.046 873 E + 03 = m2
bbl × 1.589 873 E – 01 = m3
°F (°F – 32)/1.8 = °C
ft × 3.048* E – 01 = m
in. × 2.54* E + 00 = cm


* Conversion factor is exact.
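
The factors above can be applied directly; a small sketch:

```python
# Conversion factors from the table above.
ACRE_TO_M2 = 4.046_873e3
BBL_TO_M3 = 1.589_873e-1
FT_TO_M = 3.048e-1    # exact
IN_TO_CM = 2.54       # exact

def degf_to_degc(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) / 1.8

area_m2 = 40 * ACRE_TO_M2      # a 40-acre tract in square meters
volume_m3 = 1000 * BBL_TO_M3   # 1,000 bbl in cubic meters
```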

Appendix-Risk Analysis for the Oil Industry

The following appendix is reprinted with permission from Hart's E&P.