Risk and decision analysis
The oil and gas industry invests significant money and other resources in projects with highly uncertain outcomes. We drill complex wells and build gas plants, refineries, platforms, and pipelines where costly problems can occur and where associated revenues might be disappointing. We may lose our investment; we may make a handsome profit. We are in a risky business. Assessing the outcomes, assigning probabilities of occurrence and associated values, is how we analyze and prepare to manage risk.


Risk and decision analysis software is as diverse as the analysis methods themselves. There are programs to do [[Monte_Carlo_simulation|Monte Carlo simulation]] and [[Decision_tree_analysis|decision tree analysis]]. Analytic models to do economics can be linked to both Monte Carlo simulation and decision trees. Closely related are optimization, sensitivity analysis, and influence diagrams. Extending further, we encounter forecasting, expert systems, and fuzzy logic. Within geoscientists’ purview are mapping packages and geostatistics software, both of which have the potential to offer strong support to the analysis of uncertainty.


== Language of risk analysis and decision making ==
Any description of Monte Carlo simulation and decision trees must devote some time to the underpinnings of statistics and probability. Undergraduate engineering programs sometimes include one course in statistics, and graduate programs often require one. Unfortunately, what engineers take away from those classes does not always prepare them to deal with uncertainty analysis. For whatever reason, engineers do not gain a level of comfort with the language or see immediate use for it in their jobs.

[[Statistical_concepts_in_risk_analysis|Statistical concepts in risk analysis]] introduces the concepts of:
*Central tendency (mean, mode, and median)
*Dispersion (standard deviation, ranges, and confidence intervals)
*Skewness
*The graphical tools (histograms, density functions, and cumulative distributions) necessary to communicate ideas of uncertainty about a single variable
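These summary measures are easy to compute directly. The following is a minimal Python sketch (standard library only) of the descriptive statistics listed above, applied to a small hypothetical sample of recoverable volumes in MMbbl; the numbers are illustrative, not from the text.

```python
# Descriptive statistics for a small hypothetical sample (MMbbl).
import statistics

sample = [4.1, 5.0, 5.0, 6.2, 7.8, 9.5, 14.0]

mean = statistics.mean(sample)      # central tendency
median = statistics.median(sample)
mode = statistics.mode(sample)
stdev = statistics.stdev(sample)    # dispersion (sample standard deviation)

# Adjusted Fisher-Pearson skewness coefficient: positive for a
# right-skewed sample, the shape typical of reserves and cost data.
n = len(sample)
skew = n / ((n - 1) * (n - 2)) * sum(((x - mean) / stdev) ** 3 for x in sample)
```

Note that the mean exceeds the median, which exceeds the mode, and the skewness is positive: the classic signature of a right-skewed distribution.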


Correlation and regression, especially the former, serve to describe the relationship between two parameters. We use Excel to illustrate these descriptive statistics.
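The same statistics can be worked by hand in a few lines of plain Python rather than Excel: the Pearson correlation coefficient and the least-squares line for two paired parameters. The porosity/permeability pairs below are hypothetical illustrations.

```python
# Pearson correlation and least-squares regression for paired data.
x = [0.08, 0.12, 0.15, 0.18, 0.22, 0.25]  # porosity, fraction (hypothetical)
y = [0.5, 1.1, 1.6, 2.2, 2.9, 3.3]        # log10 permeability (hypothetical)

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)          # sum of squares of x
syy = sum((yi - my) ** 2 for yi in y)          # sum of squares of y
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # cross products

r = sxy / (sxx * syy) ** 0.5   # Pearson correlation coefficient
slope = sxy / sxx              # least-squares regression slope
intercept = my - slope * mx
```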


This section clarifies what it means to fit historical data. The premise is that we usually have a small sample taken from a huge population, which we wish to describe. The process begins by constructing a histogram from the data and then seeking a density function that resembles the histogram. This statistical tool contrasts sharply with the well-known linear regression, even though their metrics for judging goodness of fit appear similar.
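The fitting process just described can be sketched in a few lines: bin a sample into a histogram, then estimate the parameters of a candidate density (here a lognormal, estimated from the logs of the data by the method of moments). The sample below is synthetic, generated for illustration.

```python
# Sketch of fitting a density to historical data (synthetic sample).
import math
import random
import statistics

random.seed(42)
data = [random.lognormvariate(2.0, 0.5) for _ in range(500)]

# Step 1: histogram — counts in 10 equal-width bins.
lo, hi, nbins = min(data), max(data), 10
width = (hi - lo) / nbins
counts = [0] * nbins
for v in data:
    counts[min(int((v - lo) / width), nbins - 1)] += 1

# Step 2: a right-skewed histogram suggests a lognormal candidate;
# estimate its parameters from the log-transformed data.
logs = [math.log(v) for v in data]
mu_hat = statistics.mean(logs)      # should be near the true 2.0
sigma_hat = statistics.stdev(logs)  # should be near the true 0.5
```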


Three common distribution types—normal, log-normal, and binomial—are discussed at length to assist users in choosing an appropriate type when building a model. The central limit theorem establishes guidelines about sums and products of distributions. A cousin of statistics, probability theory, paves the way to introduce Bayes’ theorem, which is invoked in prospect evaluation to ensure consistent logic for revising probabilities.
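A worked example shows how Bayes’ theorem revises probabilities in the prospect-evaluation spirit: update a prior chance of geologic success after a positive seismic indication. All probabilities below are hypothetical.

```python
# Bayes' theorem: revise a prior chance of success given new evidence.
p_success = 0.20            # prior chance of geologic success (hypothetical)
p_pos_if_success = 0.70     # chance of a positive indicator given success
p_pos_if_failure = 0.25     # false-positive rate given failure

# Total probability of seeing a positive indication.
p_pos = p_pos_if_success * p_success + p_pos_if_failure * (1 - p_success)

# Posterior: P(success | positive) = P(positive | success) P(success) / P(positive)
p_success_given_pos = p_pos_if_success * p_success / p_pos
```

With these inputs the prior of 0.20 is revised upward to roughly 0.41: the positive indication roughly doubles the assessed chance of success.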


== The tools of the trade ==
[[Monte Carlo simulation]] and [[Decision tree analysis|decision trees]] are defined and illustrated, compared and contrasted. Some problems yield to one or the other of these tools. Occasionally, both methods can serve a useful purpose. Decision trees are visual. Their impact diminishes as the model becomes larger and more complex. Decision trees rely on expected value, but decision makers do not always do the same, which brings about the notion of utility functions. Decision trees have their unique form of sensitivity analysis, limited to tweaking one or two variables at a time. Solutions to decision trees consist of a recommended path or choice of action and an associated expected value.
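A minimal decision-tree rollback makes the "recommended path plus expected value" idea concrete: choose between drilling and farming out using expected monetary value (EMV). The probabilities and payoffs (in $MM) are hypothetical illustrations, not figures from the text.

```python
# Minimal decision-tree rollback using expected monetary value (EMV).
def emv(outcomes):
    """Expected value of a chance node given (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

drill = emv([(0.25, 12.0),    # success: NPV of $12MM
             (0.75, -3.0)])   # dry hole: lose $3MM well cost
farm_out = emv([(0.25, 2.5),  # carried interest if partner succeeds
                (0.75, 0.0)])

# The decision node picks the branch with the highest EMV.
choice, value = max([("drill", drill), ("farm out", farm_out)],
                    key=lambda t: t[1])
```

Here the solution is a single recommended action ("drill", EMV $0.75MM versus $0.625MM for farming out); a risk-averse decision maker applying a utility function might still prefer the farm-out.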


Monte Carlo models do not result in a recommended course of action. Rather, they make estimates, providing ranges rather than single values like deterministic models. Their scope is broad, ranging from simple estimates of oil and/or gas reserves with volumetric formulas to full-scale field development. These models and the subsequent analysis and presentation show the wide range of possible outcomes and the probability of each.
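A Monte Carlo sketch of the simplest such model, a volumetric oil-in-place estimate, illustrates the range-of-outcomes output. The formula is the standard field-units volumetric relation, OOIP (bbl) = 7758 · A · h · φ · (1 − S<sub>w</sub>) / B<sub>o</sub>; all input ranges below are hypothetical triangular distributions.

```python
# Monte Carlo volumetric oil-in-place estimate (hypothetical inputs).
import random

random.seed(1)
trials = []
for _ in range(10_000):
    area = random.triangular(400, 900, 600)     # acres (low, high, mode)
    h = random.triangular(20, 60, 35)           # net pay, ft
    phi = random.triangular(0.12, 0.25, 0.18)   # porosity, fraction
    sw = random.triangular(0.20, 0.45, 0.30)    # water saturation, fraction
    bo = random.triangular(1.1, 1.4, 1.2)       # formation volume factor, RB/STB
    trials.append(7758 * area * h * phi * (1 - sw) / bo)

# Summarize the output distribution by percentiles rather than one number.
trials.sort()
p10, p50, p90 = trials[1000], trials[5000], trials[9000]
```

The result is reported as a P10/P50/P90 range, not a single deterministic value.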


[[Decision analysis: additional tools|Additional tools]] such as optimization and options may also be useful.


Among the issues raised by practitioners of risk analysis are “Why should we be doing this?” and “Now that we are doing it, are we doing it right?” Both of these questions are addressed by identifying [[Problems with deterministic models|pitfalls of deterministic models]] (to see why we should migrate toward probabilistic methods) and [[Challenges with probabilistic models|pitfalls of probabilistic models]] (to see how we might go astray here).
== Typical applications of technologies ==
Monte Carlo simulation models include:
*[[Cost and time estimates|Capital costs/authority for expenditure (AFE) development]]
*[[Resources and reserves models|Reserve estimates]]
*[[Production forecasts|Production forecasts and cash flow]]

== Design of uncertainty models ==
A proper start in risk analysis requires investing time in the design of a model. [[Design of uncertainty models]] steps through the principal components of a Monte Carlo model:
*Explicit equations and assumptions
*A list of key input distributions
*Laying the groundwork for an effective presentation


== History of risk analyses within oil/gas industry ==
Uncertainty analysis evolved during the latter half of the 20th century. Its underpinnings in statistics and probability were in place by 1900. Problem solving, especially in industrial engineering and operations research, was introduced in midcentury, following more theoretical modeling in physics, chemistry, and mathematics in the early 1900s. The computer revolution, and in particular the availability of desktop computers and spreadsheet programs in the 1980s and 1990s, supplied the final ingredient.

Of course, there had to be motivation and hard problems to solve. Oil/gas companies became more technical, and competition for funds demanded analysis of profitability. Numerical simulation methods such as reservoir and geostatistical models became established tools, making it easier to argue for Monte Carlo and decision tree tools.
=== Origins ===
Risk analysis did not simply spring forth in full bloom in the mid-20th century. Among its progenitors were the 17th- and 18th-century origins of probability theory in the context of:
*Games of chance, probability, and statistics formalism from the late 19th century
*The problem-solving and modeling interests that led to operations research, industrial engineering, and general applied mathematics


Although some notable contributions to probability and statistics appeared much earlier (Cardano, Galileo, Gauss, Fermat, the Bernoullis, De Moivre, Bayes), it was not until the end of the 19th century that statistics became formalized with pioneers like:
*Galton (percentiles, eugenics)
*Pearson (chi-square test, standard deviation, skewness, correlation)
*Spearman (rank correlation, applications in social sciences)


The Royal Statistical Society was founded in 1834, the American Statistical Association in 1839, Statistics Sweden in 1858, and La Société de Statistique de Paris (SSP) in 1860.


During the early and mid-19th century, statistics focused on population. Statistics was a mature science by the early 20th century, though the field has advanced mightily since then. Gosset introduced the t-distribution in 1908. R.A. Fisher made several advances; among other things, he:
*Invented experimental design
*Selected 5% as the standard “low level of significance”
*Invented formal statistical methods for analyzing experimental data


More recent contributions have come from John Tukey<ref name="r1"/> (stem and leaf diagram, the terms “bit” and “software”) and Edward Tufte<ref name="r2"/> (visual presentation of statistics and data).


=== Deterministic, analytical, and Monte Carlo models ===
The roots of Monte Carlo simulation were in theoretical statistics, but its applicability to a spectrum of practical problems accounts for its popularity. The name, as applied to uncertainty analysis, was coined by von Neumann, Metropolis, and Ulam at Los Alamos National Laboratory (US) around 1940. Hertz published his classic article<ref name="r3"/> in 1964. A couple of years later, Paul Newendorp began teaching classes on “petroleum exploration economics and risk analysis,” out of which evolved the first edition of his text<ref name="r4"/> in 1975, the same year as McCray<ref name="r5"/> and two years before Megill<ref name="r6"/> wrote their books on the subject. Ten years later there was commercial software available to do Monte Carlo simulation.


To appreciate a Monte Carlo model, we must first discuss deterministic and analytical models. It now may seem natural to recognize the uncertainty implicit in so many of the variables we estimate, but the early models from engineering, physics, and mathematics were deterministic: all inputs—the so-called independent variables—and hence the outputs, or dependent variable(s), were fixed values. There was no uncertainty. Thus, any Excel worksheet with at least one cell containing a formula that references other cells in order to calculate a result is a deterministic model. The operative word was “calculate,” not “estimate.” We calculated the velocity of a falling object 5 seconds after it was propelled upward with (initial) velocity of 100 ft/sec at 46° from an initial position of 500 ft above the ground, ignoring air resistance (113 ft/sec at 322°, 347 ft downrange and 458 ft high). We calculated the time for light to travel from the sun to the Earth (8 minutes 19 seconds at the equinoxes). We used calculus to calculate the optimal order quantity that would minimize total cost—ordering plus storage plus stockout—for inventory models. We found the regression line that minimized the sum of squared residuals for a crossplot.  
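The projectile example above can be worked in a few deterministic lines: launch at 100 ft/sec, 46° above horizontal, from 500 ft elevation, evaluated at t = 5 sec with no air resistance (g taken as 32.174 ft/sec²). Fixed inputs produce fixed outputs; there is no uncertainty anywhere in the calculation.

```python
# Deterministic projectile model: fixed inputs in, fixed outputs out.
import math

v0, theta, h0, t, g = 100.0, math.radians(46), 500.0, 5.0, 32.174

vx = v0 * math.cos(theta)            # horizontal velocity, constant
vy = v0 * math.sin(theta) - g * t    # vertical velocity after t seconds
speed = math.hypot(vx, vy)                                # ~113 ft/sec
downrange = vx * t                                        # ~347 ft
height = h0 + v0 * math.sin(theta) * t - 0.5 * g * t**2   # ~458 ft
```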


Introducing uncertainty amounts to replacing one or more input values with a range of possible values, or more properly, a distribution. This leads us to two classes of models, the Monte Carlo models, which are a central topic on this page, and another class called analytical models, which we discuss briefly.  


The analytical model can be thought of as lying between deterministic models and numerical simulation. In an analytical model, the inputs might be represented as probability distributions, and the outputs are also probability distributions. But, unlike a Monte Carlo simulation, we find the output by a formula. For instance, one can show that if we add two normal distributions having means 10 and 15 and standard deviations 5 and 4, respectively, and if these two inputs are independent, then the sum is a normal distribution with a mean of 25 and a standard deviation of √41. In general, for independent distributions, the sum of the means is the mean of the sum, and the sum of the variances is the variance of the sum. Things get complicated fast as our models get more complex algebraically, as we include dependence relationships and more exotic distribution types. Nonetheless, some work has been done combining probability distributions with formulas.<ref name="r7"/>  
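The analytical result just stated is easy to check by simulation: summing independent draws from N(10, 5) and N(15, 4) should behave like N(25, √41).

```python
# Simulation check of the analytical sum of two independent normals.
import math
import random
import statistics

random.seed(7)
sums = [random.gauss(10, 5) + random.gauss(15, 4) for _ in range(50_000)]

mean = statistics.mean(sums)    # analytical value: 10 + 15 = 25
stdev = statistics.stdev(sums)  # analytical value: sqrt(5**2 + 4**2) = sqrt(41)
```

With 50,000 trials the simulated mean and standard deviation land within a few hundredths of 25 and √41 ≈ 6.40, confirming the formula without any algebra.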


Decision trees had their roots in business schools. They lie somewhere between deterministic and probabilistic models. They incorporate uncertainty in both estimates of the chance that something will happen and a range (more properly a list) of consequences. Thus, they are probabilistic. The solution, however, is a single number and a unique path to follow. Moreover, the sensitivity analysis for decision trees, which adds credibility to the model, is often ignored in papers and presentations and is quite limited in its scope compared to Monte Carlo simulation.


=== Early emphasis on reserves/later cost and value ===
Throughout the latter quarter of the 20th century, the oil/gas industry gradually adopted methods of uncertainty analysis, specifically decision trees and Monte Carlo simulation. A good indication of this change is the fact that the 60-page index of the 1,727-page, 1989 printing of the ''Petroleum Engineering Handbook''<ref name="r8"/> contained only one reference to “risk (factor)” in an article about property evaluation.
Much of the early Monte Carlo simulation and decision tree work in the oil/gas industry focused on estimating reserves and resources. Industry courses sponsored by the American Association of Petroleum Geologists (AAPG) and Society of Petroleum Engineers (SPE) often emphasized exploration. Oddly, cost models and production forecasting were often given short shrift or treated trivially. By the early 1990s, however, while Wall Street was hyping hedges and both companies and individuals were wondering about optimizing their portfolios, several companies began marketing probabilistic cash flow models for the petroleum industry.

In the mid- to late 1990s, people began to build probabilistic models for prices of oil/gas rather than simply assume three simplistic deterministic forecasts (base, optimistic, and pessimistic). The half dozen or so competing cash flow models in the petroleum industry began including some form of uncertainty analysis as optional features in their software.

During the late 1990s, SPE began an intensive dialog about probabilistic reserves definitions. Finally, by 2000, pioneers were promoting portfolio optimization and real options, both of which acknowledge volatility of prices.

== References ==
<references>
<ref name="r1">Tukey, J.W. 1977. ''Exploratory Data Analysis''. Boston, Massachusetts: Addison-Wesley.</ref>
<ref name="r2">Tufte, E.R. 1983. ''The Visual Display of Quantitative Information'', second edition. Cheshire, Connecticut: Graphics Press.</ref>
<ref name="r3">Hertz, D.B. 1964. Risk Analysis in Capital Investments. ''Harvard Business Review'' '''95''' (1).</ref>
<ref name="r4">Newendorp, P. 1975. ''Decision Analysis for Petroleum Exploration''. Tulsa, Oklahoma: PennWell Corp.</ref>
<ref name="r5">McCray, A.W. 1975. ''Petroleum Evaluations and Economic Decisions''. Englewood Cliffs, New Jersey: Prentice-Hall Inc.</ref>
<ref name="r6">Megill, R.E. 1977. ''An Introduction to Risk Analysis''. Tulsa, Oklahoma: Petroleum Publishing Co.</ref>
<ref name="r7">Garvey, P.R. 1999. ''Probability Methods for Cost Uncertainty Analysis''. New York City: Marcel Dekker.</ref>
<ref name="r8">Bradley, H.B. ed. 1987. ''Petroleum Engineering Handbook'', 41-43. Richardson, Texas: SPE.</ref>
</references>


== General references ==
Smith, M.B. 1968. Estimate Reserves by Using Computer Simulation Method. ''Oil & Gas Journal'' (March): 81.

Megill, R.E. ''Evaluating & Managing Risk—A Collection of Readings''. Tulsa, Oklahoma: SciData Publishing.


== Noteworthy papers in OnePetro ==
Walstrom, J.E., Mueller, T.D., and McFarlane, R.C. 1967. Evaluating Uncertainty in Engineering Calculations. ''J Pet Technol'' '''19''' (12): 1595-1603. [http://dx.doi.org/10.2118/1928-PA http://dx.doi.org/10.2118/1928-PA]

Smith, M.B. 1970. Probability Models for Petroleum Investment Decisions. ''J Pet Technol'' '''22''' (5): 543-550. [http://dx.doi.org/10.2118/2587-PA http://dx.doi.org/10.2118/2587-PA]

Smith, M.B. 1974. Probability Estimates for Petroleum Drilling Decisions. ''J Pet Technol'' '''26''' (6): 687-695. SPE-4617-PA. [http://dx.doi.org/10.2118/4617-PA http://dx.doi.org/10.2118/4617-PA]

Behrenbruch, P., Azinger, K.L., and Foley, M.V. 1989. Uncertainty and Risk in Petroleum Exploration and Development: The Expectation Curve Method. Presented at the SPE Asia-Pacific Conference, Sydney, Australia, 13-15 September 1989. SPE-19475-MS. [http://dx.doi.org/10.2118/19475-MS http://dx.doi.org/10.2118/19475-MS]

Maharaj, U.S. 1996. Risk Analysis of Tarsands Exploitation Projects in Trinidad. Presented at the SPE Latin America/Caribbean Petroleum Engineering Conference, Port-of-Spain, Trinidad, 23-26 April 1996. SPE-36124-MS. [http://dx.doi.org/10.2118/36124-MS http://dx.doi.org/10.2118/36124-MS]

Martinsen, R., Kjelstadli, R.M., Ross, C. et al. 1997. The Valhall Waterflood Evaluation: A Decision Analysis Case Study. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38926-MS. [http://dx.doi.org/10.2118/38926-MS http://dx.doi.org/10.2118/38926-MS]

Purvis, D.C., Strickland, R.F., Alexander, R.A. et al. 1997. Coupling Probabilistic Methods and Finite Difference Simulation: Three Case Histories. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38777-MS. [http://dx.doi.org/10.2118/38777-MS http://dx.doi.org/10.2118/38777-MS]

Galli, A., Armstrong, M., and Jehl, B. 1999. Comparing Three Methods for Evaluating Oil Projects: Option Pricing, Decision Trees, and Monte Carlo Simulations. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 21-23 March 1999. SPE-52949-MS. [http://dx.doi.org/10.2118/52949-MS http://dx.doi.org/10.2118/52949-MS]

Zhang, D., Li, L., and Tchelepi, H.A. 1999. Stochastic Formulation for Uncertainty Assessment of Two-Phase Flow in Heterogeneous Reservoirs. Presented at the SPE Reservoir Simulation Symposium, Houston, Texas, 14-17 February 1999. SPE-51930-MS. [http://dx.doi.org/10.2118/51930-MS http://dx.doi.org/10.2118/51930-MS]

Chewaroungroaj, J., Varela, O.J., and Lake, L.W. 2000. An Evaluation of Procedures to Estimate Uncertainty in Hydrocarbon Recovery Predictions. Presented at the SPE Asia Pacific Conference on Integrated Modelling for Asset Management, Yokohama, Japan, 25-26 April 2000. SPE-59449-MS. [http://dx.doi.org/10.2118/59449-MS http://dx.doi.org/10.2118/59449-MS]
== Online multimedia ==


Martinsen, R., Kjelstadli, R.M., Ross, C. et al. 1997. The Valhall Waterflood Evaluation: A Decision Analysis Case Study. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38926-MS. http://dx.doi.org/10.2118/38926-MS
Schulz, Rodney. 2013. Oil and Gas Economics and Uncertainty. [http://eo2.commpartners.com/users/spe/session.php?id=11883 http://eo2.commpartners.com/users/spe/session.php?id=11883]


Purvis, D.C., Strickland, R.F., Alexander, R.A. et al. 1997. Coupling Probabilistic Methods and Finite Difference Simulation: Three Case Histories. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38777-MS. http://dx.doi.org/10.2118/38777-MS
== External links ==


Zhang, D., Li, L., and Tchelepi, H.A. 1999. Stochastic Formulation for Uncertainty Assessment of Two-Phase Flow in Heterogeneous Reservoirs. Presented at the SPE Reservoir Simulation Symposium, Houston, Texas, 14-17 February 1999. SPE-51930-MS. http://dx.doi.org/10.2118/51930-MS
Bratvold, R. and Begg, S. 2010. ''Making Good Decisions''. Richardson, Texas: Society of Petroleum Engineers. [http://store.spe.org/Making-Good-Decisions-P413.aspx http://store.spe.org/Making-Good-Decisions-P413.aspx]


==Online multimedia==
Newendorp, P. 1975. ''Decision Analysis for Petroleum Exploration.'' Tulsa, Oklahoma: PennWell. [http://www.pennwellbooks.com/deanforpeex2.html http://www.pennwellbooks.com/deanforpeex2.html]
Schulz, Rodney. 2013. Oil and Gas Economics and Uncertainty. http://eo2.commpartners.com/users/spe/session.php?id=11883


==External links==
== See also ==
Bratvold, R. and Begg, S. 2010.  ''Making Good Decisions''. Richardson, Texas: Society of Petroleum Engineers. http://store.spe.org/Making-Good-Decisions-P413.aspx


Newendorp, P. 1975. ''Decision Analysis for Petroleum Exploration.'' Tulsa, Oklahoma: PennWell. http://www.pennwellbooks.com/deanforpeex2.html
[[Statistical_concepts_in_risk_analysis|Statistical concepts in risk analysis]]


==See also==
[[Decision_tree_analysis|Decision tree analysis]]
[[Statistical concepts in risk analysis]]


[[Decision tree analysis]]
[[Monte_Carlo_simulation|Monte Carlo simulation]]


[[Monte Carlo simulation]]
[[Decision_analysis:_additional_tools]]


[[Decision analysis: additional tools]]
[[Application_of_risk_and_decision_analysis|Application of risk and decision analysis]]


[[Application of risk and decision analysis]]
[[Cost_and_time_estimates|Cost and time estimates]]


[[Cost and time estimates]]
[[Resources_and_reserves_models|Resources and reserves models]]


[[Resources and reserves models]]
[[Production_forecasts|Production forecasts]]


[[Production forecasts]]
[[Problems_with_deterministic_models|Problems with deterministic models]]


[[Problems with deterministic models]]
[[Challenges_with_probabilistic_models|Challenges with probabilistic models]]


[[Challenges with probabilistic models]]
[[Design_of_uncertainty_models|Design of uncertainty models]]


[[Design of uncertainty models]]
[[PEH:Risk_and_Decision_Analysis]]


[[PEH:Risk and Decision Analysis]]
[[Category:7.2.1 Risk, uncertainty, and risk assessment]]

Revision as of 19:02, 11 June 2015


Language of risk analysis and decision making

Any description of Monte Carlo simulation and decision trees must devote some time to the underpinnings of statistics and probability. Undergraduate engineering programs sometimes include one course in statistics, and graduate programs often require one. Unfortunately, what engineers take away from those classes does not always prepare them to deal with uncertainty analysis. For whatever reason, engineers do not gain a level of comfort with the language, nor do they see immediate use for it in their jobs.

Statistical concepts in risk analysis introduces the concepts of:

  • Central tendency (mean, mode, and median)
  • Dispersion (standard deviation, ranges, and confidence intervals)
  • Skewness
  • The graphical tools (histograms, density functions, and cumulative distributions) necessary to communicate ideas of uncertainty about a single variable

Correlation and regression, especially the former, serve to describe the relationship between two parameters. We use Excel to illustrate these descriptive statistics.

This section clarifies what it means to fit a distribution to historical data. The premise is that we usually have a small sample taken from a huge population that we wish to describe. The process begins by constructing a histogram from the data and then seeking a density function that resembles the histogram. This statistical tool contrasts sharply with the well-known linear regression, in spite of the fact that their metrics for judging goodness of fit appear similar.
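As one illustration of the fitting idea (the recovery values below are hypothetical), a log-normal density can be fit to a skewed sample by estimating the parameters of the underlying normal from the logarithms of the data:

```python
import math
import statistics

# Hypothetical sample of recoveries (MMbbl) from a skewed population
sample = [3.1, 5.6, 2.4, 8.9, 4.2, 6.7, 3.8, 12.5, 5.1, 7.3]

# Fit a log-normal: the logs of log-normal data are normally distributed,
# so estimate the normal's parameters from the log-transformed sample
logs = [math.log(x) for x in sample]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

def lognormal_pdf(x):
    """Fitted density, to be compared visually with the sample histogram."""
    return math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

print(f"fitted mu={mu:.3f}, sigma={sigma:.3f}, "
      f"mode={math.exp(mu - sigma ** 2):.2f} MMbbl")
```

This is only a sketch of the simplest (method-of-moments) fit; commercial risk-analysis packages automate the search across many candidate distribution types and report goodness-of-fit measures.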

Three common distribution types—normal, log-normal, and binomial—are discussed at length to assist users in choosing an appropriate type when building a model. The central limit theorem establishes guidelines about sums and products of distributions. A cousin of statistics, probability theory, paves the way to introduce Bayes’ theorem, which is invoked in prospect evaluation to ensure consistent logic for revising probabilities.
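The Bayesian revision mentioned above can be sketched in a few lines. The prior chance of success and the test reliabilities below are hypothetical, chosen only to show the logic:

```python
# Bayes' theorem applied to prospect evaluation: revise the chance of
# geologic success after a favorable seismic indication.
p_success = 0.20            # prior probability of success (hypothetical)
p_pos_given_success = 0.70  # chance the test is positive on a success
p_pos_given_failure = 0.20  # false-positive rate on failures

# Total probability of a positive test
p_positive = (p_pos_given_success * p_success
              + p_pos_given_failure * (1 - p_success))

# Posterior: P(success | positive test)
posterior = p_pos_given_success * p_success / p_positive
print(f"revised chance of success = {posterior:.3f}")  # 0.467
```

The consistent-logic point is that the positive indication raises the chance of success from 0.20 to about 0.47, but not to the test's 0.70 reliability: the weak prior still matters.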

The tools of the trade

Monte Carlo simulation and decision trees are defined and illustrated, compared and contrasted. Some problems yield to one or the other of these tools. Occasionally, both methods can serve a useful purpose. Decision trees are visual. Their impact diminishes as the model becomes larger and more complex. Decision trees rely on expected value, but decision makers do not always do the same, which brings about the notion of utility functions. Decision trees have their unique form of sensitivity analysis, limited to tweaking one or two variables at a time. Solutions to decision trees consist of a recommended path or choice of action and an associated expected value.
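Rolling back a decision tree to its expected value can be sketched as follows; the probabilities and dollar values ($MM) are hypothetical:

```python
# A minimal decision tree solved by "rolling back" expected values.
def expected_value(outcomes):
    """Roll back a chance node: probability-weighted sum of outcome values."""
    return sum(p * v for p, v in outcomes)

# Decision node with two branches:
#   drill   -> chance node (25% success worth 12 $MM NPV, 75% dry hole),
#              less a 2 $MM well cost
#   farm out -> a certain 0.5 $MM
drill_ev = expected_value([(0.25, 12.0), (0.75, 0.0)]) - 2.0
farm_out_ev = 0.5

choice = "drill" if drill_ev > farm_out_ev else "farm out"
print(f"EV(drill)={drill_ev:.2f} $MM, EV(farm out)={farm_out_ev:.2f} $MM "
      f"-> {choice}")
```

The solution is exactly what the text describes: a recommended path (drill) and an associated expected value (1.0 $MM), with no indication of the range of outcomes behind that average.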

Monte Carlo models do not result in a recommended course of action. Instead, they make estimates, providing ranges rather than the single values of deterministic models. Their scope is broad, ranging from simple estimates of oil and/or gas reserves with volumetric formulas to full-scale field development. These models and the subsequent analysis and presentation show the wide range of possible outcomes and the probability of each.
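A minimal volumetric-reserves Monte Carlo model might look like the sketch below. The distribution ranges are hypothetical, and triangular shapes are used only for simplicity; here P10 denotes the 10th percentile (a low case), a convention that varies between companies:

```python
import random
import statistics

random.seed(7)  # fixed seed so the run is reproducible

# Monte Carlo volumetric estimate of recoverable oil (STB).
def one_trial():
    area = random.triangular(400, 900, 600)         # acres (low, high, mode)
    thickness = random.triangular(20, 60, 35)       # ft
    porosity = random.triangular(0.10, 0.25, 0.18)  # fraction
    sw = random.triangular(0.20, 0.45, 0.30)        # water saturation
    bo = random.triangular(1.1, 1.4, 1.2)           # formation volume, RB/STB
    rf = random.triangular(0.15, 0.40, 0.25)        # recovery factor
    return 7758 * area * thickness * porosity * (1 - sw) / bo * rf

trials = sorted(one_trial() for _ in range(10_000))
p10, p50, p90 = (trials[i * len(trials) // 100] for i in (10, 50, 90))
print(f"P10={p10/1e6:.1f}  P50={p50/1e6:.1f}  P90={p90/1e6:.1f} MMSTB")
```

The output is a distribution of outcomes, not a single answer; the sorted trials also yield the cumulative curve that typically anchors the presentation.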

Additional tools such as optimization and options may also be useful.

Among the issues raised by practitioners of risk analysis are “Why should we be doing this?” and “Now that we are doing it, are we doing it right?” Both of these questions are addressed by identifying pitfalls of deterministic models (to see why we should migrate toward probabilistic methods) and pitfalls of probabilistic models (to see how we might go astray here).

Typical applications of technologies

Typical Monte Carlo simulation models include cost and time estimates, resources and reserves models, and production forecasts, each treated on its own page (see also Application of risk and decision analysis).

Design of uncertainty models

A proper start in risk analysis requires investing time in the design of a model. Design of uncertainty models steps through the principal components of a Monte Carlo model:

  • Explicit equations and assumptions
  • A list of key input distributions
  • Sensible selection of outputs (not too many, not too few)
  • Correlation among inputs
  • Early screening of key variables through sensitivity analysis
  • Groundwork for an effective presentation

History of risk analyses within oil/gas industry

Uncertainty analysis evolved during the latter half of the 20th century. Its underpinnings in statistics and probability were in place by 1900. Problem solving, especially in industrial engineering and operations research, was introduced in midcentury, following more theoretical modeling in physics, chemistry, and mathematics in the early 1900s. The computer revolution, and in particular the availability of desktop computers and spreadsheet programs in the 1980s and 1990s, supplied the final ingredient.

Of course, there had to be motivation and hard problems to solve. Oil/gas companies became more technical, and competition for funds demanded analysis of profitability. Numerical simulation methods such as reservoir and geostatistical models became established tools, making it easier to argue for Monte Carlo and decision tree tools.

Origins

Risk analysis did not simply spring forth in full bloom in the mid-20th century. Among its progenitors were:

  • The 17th- and 18th-century origins of probability theory in the context of games of chance
  • Probability and statistics formalism from the late 19th century
  • The problem-solving and modeling interests that led to operations research, industrial engineering, and general applied mathematics
  • The more technical side of business and economics

Although some notable contributions to probability and statistics appeared much earlier (Cardano, Galileo, Gauss, Fermat, the Bernoullis, De Moivre, Bayes), it was not until the end of the 19th century that statistics became formalized, with pioneers such as:

  • Galton (percentiles, eugenics)
  • Pearson (chi-square test, standard deviation, skewness, correlation)
  • Spearman (rank correlation, applications in social sciences)

The Royal Statistical Society was founded in 1834, the American Statistical Association in 1839, Statistics Sweden in 1858, and La Société de Statistique de Paris (SSP) in 1860.

During the early and mid-19th century, statistics focused on describing populations. Statistics was a mature science by the early 20th century, though the field has advanced mightily since then. Gosset introduced the t-distribution in 1908. R.A. Fisher made several advances, including:

  • Invented experimental design
  • Selected 5% as the standard “low level of significance”
  • Introduced terms such as “parameter” and “statistic” to the literature
  • Solved problems in distribution theory that were blocking further progress
  • Invented formal statistical methods for analyzing experimental data

More recent contributions have come from John Tukey[1] (stem and leaf diagram, the terms “bit” and “software”) and Edward Tufte[2] (visual presentation of statistics and data).

Deterministic, analytical, and Monte Carlo models

The roots of Monte Carlo simulation were in theoretical statistics, but its applicability to a spectrum of practical problems accounts for its popularity. The term Monte Carlo, as applied to uncertainty analysis, was coined by von Neumann, Metropolis, and Ulam at Los Alamos National Laboratory (US) in the mid-1940s. Hertz published his classic article[3] in 1964. A couple of years later, Paul Newendorp began teaching classes on “petroleum exploration economics and risk analysis,” out of which evolved the first edition of his text[4] in 1975, the same year as McCray[5] and two years before Megill[6] wrote their books on the subject. Ten years later, commercial software was available to do Monte Carlo simulation.

To appreciate a Monte Carlo model, we must first discuss deterministic and analytical models. It now may seem natural to recognize the uncertainty implicit in so many of the variables we estimate, but the early models from engineering, physics, and mathematics were deterministic: all inputs—the so-called independent variables—and hence the outputs, or dependent variable(s), were fixed values. There was no uncertainty. Thus, any Excel worksheet with at least one cell containing a formula that references other cells in order to calculate a result is a deterministic model. The operative word was “calculate,” not “estimate.” We calculated the velocity of a falling object 5 seconds after it was propelled upward with (initial) velocity of 100 ft/sec at 46° from an initial position of 500 ft above the ground, ignoring air resistance (113 ft/sec at 322°, 347 ft downrange and 458 ft high). We calculated the time for light to travel from the sun to the Earth (8 minutes 19 seconds at the equinoxes). We used calculus to calculate the optimal order quantity that would minimize total cost—ordering plus storage plus stockout—for inventory models. We found the regression line that minimized the sum of squared residuals for a crossplot.
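The falling-object calculation quoted above can be reproduced directly; small differences from the quoted figures come from rounding and the value assumed for g:

```python
import math

# Deterministic model: the projectile example in the text, 5 s after
# launch at 100 ft/sec and 46 degrees from 500 ft above the ground,
# ignoring air resistance.  Every input is fixed, so the output is a
# single calculated value, not an estimate.
g = 32.174                              # ft/sec^2, standard gravity
v0, angle_deg, t, y0 = 100.0, 46.0, 5.0, 500.0

vx = v0 * math.cos(math.radians(angle_deg))
vy = v0 * math.sin(math.radians(angle_deg)) - g * t  # negative: now falling

speed = math.hypot(vx, vy)                                 # ~113 ft/sec
x = vx * t                                                 # ~347 ft downrange
y = y0 + v0 * math.sin(math.radians(angle_deg)) * t - 0.5 * g * t ** 2

print(f"speed={speed:.0f} ft/sec, downrange={x:.0f} ft, height={y:.0f} ft")
```

Replacing any of the fixed inputs (say, the initial velocity) with a distribution is exactly the step, described next, that turns a deterministic model into a probabilistic one.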

Introducing uncertainty amounts to replacing one or more input values with a range of possible values, or more properly, a distribution. This leads us to two classes of models, the Monte Carlo models, which are a central topic on this page, and another class called analytical models, which we discuss briefly.

The analytical model can be thought of as lying between deterministic models and numerical simulation. In an analytical model, the inputs might be represented as probability distributions, and the outputs are also probability distributions. But, unlike a Monte Carlo simulation, we find the output by a formula. For instance, one can show that if we add two normal distributions having means 10 and 15 and standard deviations 5 and 4, respectively, and if these two inputs are independent, then the sum is a normal distribution with a mean of 25 and a standard deviation of √41. In general, for independent distributions, the sum of the means is the mean of the sum, and the sum of the variances is the variance of the sum. Things get complicated fast as our models get more complex algebraically, as we include dependence relationships and more exotic distribution types. Nonetheless, some work has been done combining probability distributions with formulas.[7]
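The analytical result for the sum of two normals can be checked by simulation, illustrating how Monte Carlo and analytical models answer the same question by different routes:

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

# Analytical result quoted above: the sum of independent N(10, 5) and
# N(15, 4) is N(25, sqrt(41)).  A Monte Carlo model reaches the same
# answer by sampling instead of by formula.
n = 100_000
sums = [random.gauss(10, 5) + random.gauss(15, 4) for _ in range(n)]

mean = statistics.fmean(sums)
stdev = statistics.pstdev(sums)
print(f"simulated mean={mean:.2f}, stdev={stdev:.2f} "
      f"(analytical: 25, {41 ** 0.5:.2f})")
```

For this simple sum the formula is clearly preferable; the simulation earns its keep once the model involves products, dependence among inputs, or distribution types with no convenient closed form.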

Decision trees had their roots in business schools. They lie somewhere between deterministic and probabilistic models. They incorporate uncertainty in both estimates of the chance that something will happen and a range (more properly a list) of consequences. Thus, they are probabilistic. The solution, however, is a single number and a unique path to follow. Moreover, the sensitivity analysis for decision trees, which adds credibility to the model, is often ignored in papers and presentations and is quite limited in its scope compared to Monte Carlo simulation.

Early emphasis on reserves/later cost and value

Throughout the latter quarter of the 20th century, the oil/gas industry gradually adopted methods of uncertainty analysis, specifically decision trees and Monte Carlo simulation. A good indication of this change is the fact that the 60-page index of the 1,727-page, 1989 printing of the Petroleum Engineering Handbook[8] contained only one reference to “risk (factor)” in an article about property evaluation.

Much of the early Monte Carlo simulation and decision tree work in the oil/gas industry focused on estimating reserves and resources. Industry courses sponsored by the American Association of Petroleum Geologists (AAPG) and Society of Petroleum Engineers (SPE) often emphasized exploration. Oddly, cost models and production forecasting were often given short shrift or treated trivially. By the early 1990s, however, while Wall Street was hyping hedges and both companies and individuals were wondering about optimizing their portfolios, several companies began marketing probabilistic cash flow models for the petroleum industry.

In the mid- to late 1990s, people began to build probabilistic models for prices of oil/gas rather than simply assume three simplistic deterministic forecasts (base, optimistic, and pessimistic). The half dozen or so competing cash flow models in the petroleum industry began including some form of uncertainty analysis as optional features in their software.

During the late 1990s, SPE began an intensive dialog about probabilistic reserves definitions. Finally, by 2000, pioneers were promoting portfolio optimization and real options, both of which acknowledge volatility of prices.

References

  1. _
  2. _
  3. _
  4. _
  5. _
  6. _
  7. _
  8. _

General references

Smith, M.B. 1968. Estimate Reserves by Using Computer Simulation Method. Oil & Gas Journal (March): 81.

Hertz, D.B. 1964. Risk Analysis in Capital Investments. Harvard Business Review 42 (1): 95–106.

Higgins, J.G. 1993. Planning for Risk and Uncertainty in Oil Exploration. Long Range Planning 26 (1): 111–122.

Megill, R.E. 1977. An Introduction to Risk Analysis. Tulsa, Oklahoma: Petroleum Publishing Company.

Megill, R.E. Evaluating & Managing Risk: A Collection of Readings. Tulsa, Oklahoma: SciData Publishing.

Noteworthy papers in OnePetro

Walstrom, J.E., Mueller, T.D., and McFarlane, R.C. 1967. Evaluating Uncertainty in Engineering Calculations. J Pet Technol 19 (12): 1595-1603. http://dx.doi.org/10.2118/1928-PA

Smith, M.B. 1970. Probability Models for Petroleum Investment Decisions. J Pet Technol 22 (5): 543-550. http://dx.doi.org/10.2118/2587-PA

Smith, M.B. 1974. Probability Estimates for Petroleum Drilling Decisions. J Pet Technol 26 (6): 687-695. SPE-4617-PA. http://dx.doi.org/10.2118/4617-PA

Behrenbruch, P., Azinger, K.L., and Foley, M.V. 1989. Uncertainty and Risk in Petroleum Exploration and Development: The Expectation Curve Method. Presented at the SPE Asia-Pacific Conference, Sydney, Australia, 13-15 September 1989. SPE-19475-MS. http://dx.doi.org/10.2118/19475-MS

Chewaroungroaj, J., Varela, O.J., and Lake, L.W. 2000. An Evaluation of Procedures to Estimate Uncertainty in Hydrocarbon Recovery Predictions. Presented at the SPE Asia Pacific Conference on Integrated Modelling for Asset Management, Yokohama, Japan, 25-26 April 2000. SPE-59449-MS. http://dx.doi.org/10.2118/59449-MS

Galli, A., Armstrong, M., and Jehl, B. 1999. Comparing Three Methods for Evaluating Oil Projects: Option Pricing, Decision Trees, and Monte Carlo Simulations. Presented at the SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, 21-23 March 1999. SPE-52949-MS. http://dx.doi.org/10.2118/52949-MS

Maharaj, U.S. 1996. Risk Analysis Of Tarsands Exploitation Projects in Trinidad. Presented at the SPE Latin America/Caribbean Petroleum Engineering Conference, Port-of-Spain, Trinidad, 23-26 April 1996. SPE-36124-MS. http://dx.doi.org/10.2118/36124-MS

Martinsen, R., Kjelstadli, R.M., Ross, C. et al. 1997. The Valhall Waterflood Evaluation: A Decision Analysis Case Study. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38926-MS. http://dx.doi.org/10.2118/38926-MS

Purvis, D.C., Strickland, R.F., Alexander, R.A. et al. 1997. Coupling Probabilistic Methods and Finite Difference Simulation: Three Case Histories. Presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5-8 October 1997. SPE-38777-MS. http://dx.doi.org/10.2118/38777-MS

Zhang, D., Li, L., and Tchelepi, H.A. 1999. Stochastic Formulation for Uncertainty Assessment of Two-Phase Flow in Heterogeneous Reservoirs. Presented at the SPE Reservoir Simulation Symposium, Houston, Texas, 14-17 February 1999. SPE-51930-MS. http://dx.doi.org/10.2118/51930-MS

Online multimedia

Schulz, Rodney. 2013. Oil and Gas Economics and Uncertainty. http://eo2.commpartners.com/users/spe/session.php?id=11883

External links

Bratvold, R. and Begg, S. 2010. Making Good Decisions. Richardson, Texas: Society of Petroleum Engineers. http://store.spe.org/Making-Good-Decisions-P413.aspx

Newendorp, P. 1975. Decision Analysis for Petroleum Exploration. Tulsa, Oklahoma: PennWell. http://www.pennwellbooks.com/deanforpeex2.html

See also

Statistical concepts in risk analysis

Decision tree analysis

Monte Carlo simulation

Decision analysis: additional tools

Application of risk and decision analysis

Cost and time estimates

Resources and reserves models

Production forecasts

Problems with deterministic models

Challenges with probabilistic models

Design of uncertainty models

PEH:Risk and Decision Analysis