Look backs and forecast verification
“It would be an understatement to say that reservoir performance forecasting is not an exact science.” (Saleri 1992).
Historically, the oil industry has delivered only about 75% of what was forecast (Nandurdikar 2011; Franzen 1980). This chapter looks at ways to reduce this mismatch.
There are three paradoxes we struggle with when forecasting oil rates or reserves; these are discussed in the initial portion of the chapter. Next we quote the typical error ranges of forecasts found in the literature, although there is a shortage of published data on error ranges. Following this, the chapter summarizes a look-back procedure that will improve the next generation of forecasts as dynamic or new data comes in.
In addition to technical factors internal to the reservoir or production system, external factors such as project delays play a role, so these will be discussed as well.
All human forecasting struggles to improve accuracy. Therefore we will also discuss typical biases and psychological factors, because they affect forecasts in many fields, not just the petroleum industry.
Paradox 1: Overestimating initial performance and underestimating long-term performance
There is a paradox that should be understood when trying to forecast oil and gas production: we often overestimate initial recovery and rates of recovery but underestimate long-term reserves. In general, reservoir heterogeneity and an underestimation of complexity cause an overestimation of initial recovery. External factors such as project equipment delays and/or operational factors also contribute to overestimation of rates of recovery and underestimation of project time.
However, technology and a better understanding of reservoir complexity and reservoir flow allow an individual reservoir’s recovery to increase with time. This common phenomenon is generally called reserve growth and is seen in thousands of reservoirs (Antansi 1994; Baker 2004; Verma 2001; Beliveau 2003; Lore 1997; Klett 2003; Caldwell and Heather 1997). Ironically, technology and knowledge of reservoir heterogeneity allow us to grow reserves later in the life of the field. Reserve growth is most likely in marginal reservoirs that are “permeability or mobility challenged”: low-permeability gas and oil fields as well as heavy oil fields (Baker 2004; Caldwell and Heather 1997). Smaller reservoirs also greatly benefit from technology. In these mobility- or permeability-challenged reservoirs, pay cutoff criteria have been too high or too conservative. Technology has allowed better targeting, enhanced oil recovery, and better well contact with the reservoir, letting us target poorer-quality resources as shown in Fig. 1.
Figure 1 – Resource Pyramid and development of Technology (Pending permission approval)
An example of how technology has enabled higher oil reserves is from the North Sea:
“As an example, in the British North Sea, many small fields that were discovered in the early days of exploration were not produced because they were not economic. Later, however, as other discoveries and other infrastructure were put in place nearby and as new technologies such as subsea templates were developed, it became much cheaper to develop small fields and connect them to nearby pipelines or platforms. Thus, in 1995, approximately 16% of production in the UK came from fields that had been discovered before 1980, but not put into production until after 1990.” Lynch 2002
Paradox 2: Knowing the reservoir is complex but starting with simple models
There is a second paradox associated with forecasting, and more specifically with reservoir characterization. We know a priori that the models we construct initially are generally too simple, so our early models are too optimistic or neglect some phenomena. Therefore, our look-backs usually focus on what we are missing, i.e., on adding complexity. But because we do not yet know what the components of that complexity are, we use simple (non-complex) tools first, as range-finding tools that tell us the key factors controlling the performance of the reservoir. This allows us to refine our answer as we learn more and more about the reservoir, gradually adding complexity to the model in keeping with Occam’s razor.
The phenomenon is analogous to assembling a puzzle. When we first examine a piece, we generally look at its overall size, color, and complexity. We do not need exact measurements of any of those parameters until we actually fit the piece into the puzzle itself.
Therefore the paradox of reservoir characterization is that we start with simple tools to learn what the controlling factors are, knowing that we will probably end up using a more complex model to explain the overall performance. In other words, the reservoir and the associated production system
“is more complicated than we think, but we need simple tools to help us and guide us in realizing which part complicates the problem and what is the limiting variable.” Rous 2015
Paradox 3: Lack of improvement in forecasting accuracy despite better technology
The third paradox is that our forecasts have not improved despite better reservoir description and much better computational power over the last twenty to forty years. Saleri 1992 discusses this phenomenon:
"there is a paradox in forecasting; a more detailed reservoir description and rigorous realizations do not necessarily translate to more accurate forecasts. Our ability to produce realistic geostatistical capabilities has not been matched by computational capabilities to conduct flow simulation."
Nandurdikar 2011 also discusses the gap between higher expectations for forecasts, due to better and faster computer output and programs, and the constant or even poorer discrepancies (error bars). We believe the reason is that many “Black Swan” factors are not visible in the initial static data; we discuss this phenomenon in the section on external factors. This paradox is seen in other forecasting fields as well (“when things bite back”).
Typical Error Ranges
As stated earlier, we should not expect initial models to provide precise answers on reservoirs or fields with no calibration of the geological (static) model and the dynamic model. We only sample approximately one ten billionth of the reservoir with core and logs. There are many examples of forecasts that do not match the actual field data (Corrigan 1988, Castle 1986, Franzen 1980, Saleri 1991, Nandurdikar 2011).
The initial forecasts on a field basis will be accurate to approximately +/-25% with a well-defined process or drive mechanism. There is a strong tendency to overestimate oil rates within the first five years. The forecast can have larger error bars, +/-60%, if the process or drive mechanism is ill-defined or the direction of flow is unclear for a particular reservoir (thermal EOR, chemical EOR, or miscible gas injection). Reservoir architecture plays an important role. The forecast can also have error bars of +/-60% if either the structural model (faults/fractures) or the permeability/porosity model has large variance.
Nandurdikar 2011 examined 59 offshore fields and found that projects deliver oil rates that are approximately 75% of the initial forecast. Franzen found a 23% error in initial oil rates for offshore Gulf of Mexico projects. This error range is similar to Dake’s analysis.
In roughly 30-50% of the projects that do not reach their target, the reason given by the operator is poor reservoir performance (Nandurdikar 2011). Our own experience, and that of others, is that the failure rate of waterfloods and EOR due to reservoir heterogeneity is 50-60% (Jackson 1968). Early breakthrough of injected fluids, especially those with high mobility ratios, is extremely hard to predict because high-permeability thief zones or induced fractures are only a small fraction of the reservoir (<1-10%) yet can control a large portion of flow. Nandurdikar 2011 shows that for offshore fields with lower API gravity and higher-viscosity crude, the quality of the forecast was degraded.
Early breakthrough of injected fluids, or of gas or water at a producer, depends on high-permeability, high-continuity pathways, i.e., on the extremes of the permeability data. Unfortunately we cannot easily see how continuous the high-permeability zones are or in which direction they run; it is a classic case of trying to extrapolate the extremes of data (Nassim Taleb). Coring and logging can often do a reasonable job of sampling the average variables, but data such as the connectivity of high- or extreme-permeability regions cannot be directly tested at the lab scale. Early gas or water breakthrough may require increased fluid handling and longer project times. In my opinion, we can state that early breakthrough is likely, but we cannot predict a priori the exact wells and timing of breakthrough unless we have offsetting fields with breakthrough data and similar geological features.
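The influence of the permeability extremes can be screened with a standard heterogeneity index. The sketch below is a hedged illustration (the core data are hypothetical, not from this chapter): it estimates the Dykstra-Parsons coefficient of permeability variation by fitting a log-normal distribution to core permeabilities. Values near 0 indicate a homogeneous reservoir; values above roughly 0.7 indicate the kind of extreme-permeability contrast that makes early breakthrough of injected fluids likely.

```python
import math
import statistics

def dykstra_parsons(perms_md):
    """Estimate the Dykstra-Parsons coefficient of permeability
    variation, V = (k50 - k84.1) / k50, by fitting a log-normal
    distribution to core permeabilities (millidarcies).
    With mu, sigma the mean and std dev of ln(k), the median is
    k50 = exp(mu) and k84.1 = exp(mu - sigma), so V = 1 - exp(-sigma)."""
    logs = [math.log(k) for k in perms_md]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)
    k50 = math.exp(mu)            # median permeability
    k841 = math.exp(mu - sigma)   # one std dev below median, log scale
    return (k50 - k841) / k50

# Hypothetical core-plug permeabilities, md
core = [1.2, 5.0, 12.0, 48.0, 150.0, 620.0]
print(round(dykstra_parsons(core), 2))
```

Note that this only summarizes the magnitude of the contrast; as the text stresses, the spatial continuity and direction of the high-permeability pathways are not captured by core statistics at all.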
Estimates of external project timing delays were on the order of 20% for Gulf of Mexico offshore projects. Considering the thousands or tens of thousands of forecasts done every year, there is a shortage of published data on error ranges (actual production data vs. forecast). We believe the industry would greatly benefit from such studies.
With more production data and dynamic data we will converge to less error in the ultimate recovery and ultimate recovery factor (Thompson 1989). However, the remaining reserves in a particular field will still have wide error bars because of the interaction of economics, equipment failure, and technical factors. Often wells and fields are not shut in at their economic limits; rather, they are abandoned when equipment or pipelines fail or operating expenses are too high. These chaotic events are difficult to predict exactly. Economics and oil price also cause termination and abandonment of fields or wells. Therefore uncertainty in economics (oil price, costs) and equipment failure causes uncertainty in the remaining reserves of marginal fields.
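The sensitivity of remaining reserves to the economic limit can be shown with a one-line decline-curve calculation. This is a hedged sketch under a simple exponential-decline assumption, Np = (q_i - q_limit)/D, with hypothetical numbers; it is not data from the chapter.

```python
def remaining_reserves_exp(q_i, q_limit, D):
    """Remaining reserves under exponential decline, from current
    rate q_i down to economic-limit rate q_limit (same rate units),
    with nominal decline D per unit time: Np = (q_i - q_limit) / D."""
    return (q_i - q_limit) / D

# Hypothetical marginal well: 100 bbl/d today, 15%/yr nominal decline.
q_i = 100 * 365   # current rate, bbl/yr
D = 0.15          # nominal decline, 1/yr
for q_lim_bpd in (5, 10, 20):   # uncertain economic-limit rate, bbl/d
    np_bbl = remaining_reserves_exp(q_i, q_lim_bpd * 365, D)
    print(q_lim_bpd, round(np_bbl))
```

Even in this idealized case, moving the economic limit between 5 and 20 bbl/d shifts remaining reserves by tens of thousands of barrels, which is the point of the paragraph above: price, cost, and failure uncertainty feed directly into remaining-reserve uncertainty.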
A look-back or audit as defined here is a comparison of model forecast vs. actual data for a reservoir or a field. The assumption in this chapter is that we have an existing model (simulation or analytical) in place.
There is a large amount of uncertainty with any uncalibrated model, and look-back studies can therefore be used to confirm the validity and applicability of the model. The uncertainty is mainly due to the lack of direct sampling of the reservoir and to not knowing which phenomena affect our field and well performance. Companies and individuals should compare forecasts with actual field production data on a regular basis. This provides a regular feedback loop that enables forecasters to learn from their mistakes and missing data and can lead to improved future forecasts.
We should not expect initial models to provide precise answers on reservoirs or fields with no calibration of the geological (static) model, analogy, or dynamic data. But we should expect our forecasts to become more accurate with time and iterations.
When doing a look back on a forecast or model we first need to develop an independent view of the reservoir and well performance. Otherwise, large anchoring effects can occur in which we accept immature assumptions that are implicit in the model and perpetuate the errors.
The look back workflow somewhat depends on the availability of data. In the workflow it is assumed that there is production data (oil, gas + water) in the range of six months to 2-3 years as well as some static pressure measurements initially and after some production started.
A forecast has three components:
- Subsurface and well lift issues (reservoir, reservoir characterization, well lifting and completion)
- Facility performance (uptime), facility constraints, activity scheduling, customer demand, Force Majeure events
- Human factors, which can be inherent in both internal and external factors
In terms of determining whether the reservoir description is correctly represented and how it affects the accuracy of a forecast, a number of diagnostic techniques are applied:
- Estimate the OOIP volumetrically as well as if possible by material balance.
- From initial pressure and initial steady state or pseudo steady state rates calculate overall average permeability on initial spacing
- Calculate large scale permeability for each well using inflow performance data
- Examine core permeability
- Apply correction factors (Klinkenberg, stress and initial water saturation effects on air to liquid permeability) to core permeability
- Compare permeability derived from core data, inflow performance and/or pressure transient data
- Calculate the ratio of large-scale permeability to small-scale permeability (Keffective large scale/Kcore small scale)
- Determine whether the corrected ratio is less than or greater than one, and whether the difference can be explained by cross-bedding, induced fracturing, faulting, or limited core sampling
- Once we have a partial understanding of the drive mechanism and reservoir flows as well as OOIP, see if material balance can yield more information on aquifers or gas cap (outer boundary effects)
- Examine spatially variant and time-variant behavior; look for general reservoir fluid movement trends, and compare those production trends with geological maps of thickness, top of structure, etc.
- Examine time-variant behavior; use decline rate and type as well as breakthrough times (inflections) to determine drive mechanism, reservoir architecture, completion efficiency, etc., and try to tie those profiles to drive mechanism and distances to water-oil or gas-oil contacts
- When new wells come on, examine whether well interference occurs
- We usually identify completion problems by comparison with other wells and/or pressure buildup tests separating skin effects from far field permeability.
- To check for interference we compare decline rates before and after new wells are brought on.
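Two of the steps above (backing out large-scale permeability from inflow performance and forming the Keffective/Kcore ratio) can be sketched as follows. The pseudo-steady-state radial inflow equation in oilfield units is standard, but every well parameter and the corrected core permeability below are hypothetical, chosen only to illustrate the diagnostic:

```python
import math

def large_scale_perm_md(q_stb_d, B_rb_stb, mu_cp, h_ft,
                        p_avg_psi, p_wf_psi, r_e_ft, r_w_ft, skin=0.0):
    """Back out large-scale effective permeability (md) from
    pseudo-steady-state radial inflow, oilfield units:
        k = 141.2 q B mu [ln(re/rw) - 3/4 + s] / (h (p_avg - p_wf))"""
    return (141.2 * q_stb_d * B_rb_stb * mu_cp *
            (math.log(r_e_ft / r_w_ft) - 0.75 + skin) /
            (h_ft * (p_avg_psi - p_wf_psi)))

# Hypothetical well: 500 stb/d, 600 psi drawdown on 50 ft of pay
k_eff = large_scale_perm_md(q_stb_d=500, B_rb_stb=1.2, mu_cp=0.8,
                            h_ft=50, p_avg_psi=3000, p_wf_psi=2400,
                            r_e_ft=1500, r_w_ft=0.35)
k_core = 30.0   # hypothetical corrected average core permeability, md
print(round(k_eff, 1), round(k_eff / k_core, 2))
```

A ratio well below one, as in this made-up case, would prompt the questions in the list above: plugging or skin mis-estimation, limited core sampling, or cross-bedding; a ratio well above one would suggest fractures or thief zones not sampled by core.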
External factors need to be considered in many forecasts. Facility issues cause delays 25-33% of the time. Nandurdikar 2011, Franzen 1980 Forecasts sometimes incorrectly assume:
- Flawless execution
- 100% uptime
- Unlimited gross liquid or gas handling capability and well capacity
- No constraints due to government regulation
- No black swan effects
The best way to handle delay is to base our schedule on our best guess as well as on analogous data. It is a common but incorrect assumption that project delays will lessen because of general experience and better scheduling software. However, our projects are increasingly complex, and therefore there are more and more components that interact. Many new projects involve new technology, and most large projects have a fair percentage of inexperienced staff. Even experienced teams have to learn how new technology interacts. Therefore there is still a strong learning-time component in new large projects, and learning curves for operations and field staff should be built into field forecasts.
The third paradox mentioned is the lack of progress in improving accuracy despite large improvements in computational power and better reservoir characterization. However, it is understandable that we still have shortfalls in our forecasts because of the additional complexity in our projects. If we compare the oil industry of the 1930s with that of 2015, there is a huge interdependency of technology now compared to the past. In the 1930s most oil production worldwide came from land-based oil fields in primary production with simple lifting systems or no artificial lift. Now we have complex offshore rigs in very difficult environments with complex wells, or heavy oil enhanced oil recovery (EOR) systems. In addition, for both offshore and EOR projects, a handful of key wells, rather than a distributed system, usually controls the overall project oil rates, so failure of a few wells can hurt oil productivity substantially. Finally, most large projects are now injection supported, which makes them more sensitive to permeability heterogeneity; injection processes are prone to more “known unknown” effects than primary drive, and geostatistical techniques were developed in part to counteract this. Given the interaction of all these components, it is not surprising that forecasts still have wide error bars.
The phenomenon of underestimating the time required to understand complex projects is not unique to the oil industry. Humans seem to be optimistic in building timelines for complex projects:
”The common tendency to underestimate the time needed for a complex and uncertain series of events to take place is due to the inability of most people to estimate correctly the probability of a conjunctive event, an event requiring the success of several stages.” Tversky and Kahneman
It is important to understand the common psychological mistakes made during the forecasting process. These mistakes are not unique to the oil industry, and many forecasting fields suffer from them. There are five common biases that creep into our forecasts:
- Planning Fallacy
- Survivorship Bias
- Over simplistic initial mental models
- Over confidence in our answers and ability
- Anchoring Effect
It is very common to see intense focus on “internal” factors and relative neglect of “external” factors. Ironically sometimes experts and dominant members of the team can contribute to this phenomenon.
A large majority of published papers and the general press report on success stories; in general, companies and people do not like to discuss failed projects. Because only successes are reported, the sampling is biased. Therefore, individuals underestimate risk in projects, especially if they have no direct experience in the area or technology.
Over simplification of our initial mental models
We oversimplify our initial mental models when we first learn about a new technology or a new system; we use simplified mental models to understand the concepts. But in many cases the controlling variable may be hidden or not transparent in those models. Often complex technology requires years of effort to determine the interaction of its components (Esso paper on Cold Lake), because the interaction of individual components cannot be determined before the project is started.
There are four problems associated with oversimplification of our models:
- We ignore the learning time associated with the reservoir and production system (described above)
- We collect data that confirms the simple model
- We tend to use average properties when the extremes of the data control the system
- We neglect “grey swan” effects
Because of the oil industry’s cost-conscious behavior and an Occam’s razor philosophy, often only routine, minimal data is collected and considered in many fields. Because of the limited data and small sample sizes, our initial models are simplistic. In other words, we tend to believe the simplistic model initially, often because we do not collect the data that would prove out another possibility.
- We implicitly assume that average core permeability is approximately equal to the large-scale effective permeability. In many situations, especially in naturally fractured or poor-quality reservoirs, it is not. In a naturally fractured reservoir (NFR), we may not run a fracture image log or take horizontal core because we do not believe the reservoir is fractured, and instead use average matrix permeability.
- In many simulation models we initially assume 100% continuity of major pay zones. However, in most reservoirs the super-permeable zones or the super-low-permeability regions control flow (Bevens ~1989 Beaver River; Amoco North Sea study; Dake). Sometimes these regions are only a very small fraction of the total pore volume, so using average permeability in a model incorrectly forecasts the future.
- In most conventional workflows and flow simulation studies we implicitly assume that near-wellbore effects are second-order effects; we neglect grey swan effects. Therefore, when we history match a field we typically adjust far-field or interwell permeability, but there are many cases where near-wellbore plugging chokes off the well and the reservoir flow mechanics are controlled by the near-wellbore region.
- Finally, an excellent example of the use of overly simplified models is in hydraulic fracturing. We use simple infinite-conductivity, linear, man-made fracture (2D) static models, with half-lengths equal on both sides of the well. Yet we know from interference tests, micro-seismic, tilt metering, pressure observations, temperature logging, and interwell tracer tests that fractures are three-dimensional and complex in nature. In addition, pressure depletion, changing stress fields, faulting, and matrix permeability mean complex induced hydraulic fractures are affected by poro-elastic and temperature effects, so man-made fractures may change with time. Faults, natural fractures, and geology mean that man-made fractures are dendritic (tree-like) and asymmetric (Wright; Warpinki). Finally, it should be noted that induced hydraulic fractures may have substantial pressure drops within the fractures. It is therefore not surprising that these single-phase-flow linear models deviate from actual field performance, especially at longer time intervals (t > one month).
Over confidence in our answers and ability
Generally people, especially the inexperienced or laymen, tend to overestimate their ability to forecast a project.
“Having no good quantitative idea of uncertainty, there is an almost universal tendency for people to understate it. Thus they overestimate the precision of their own knowledge and contribute to later and unwelcome surprises.” Cappen 1975
Anchoring Effects
Finally, both forecasters and look-back personnel need to be aware of anchoring effects. It is common for groups of people, or experts, to be systematically biased toward the initial estimate (Tversky and Kahneman 1974). In other words, if values of permeability and water saturation are in use, they are likely to be reused again and again without being critically examined. In my experience, values for Original Oil in Place (OOIP) are very likely to carry a large anchoring effect: despite large amounts of water production in some heavy oil reservoirs, people refuse to discuss lower OOIP and water influx.
Groups of forecasters can be highly influenced by recent events and external influences, so in good economic times estimates are more optimistic and in hard economic times estimates are more pessimistic:
“The evaluator (engineer or geologist) does not operate in a vacuum. He is influenced by what he perceives as his superior’s expectation. Projects compete with other projects for approval and funding and as with any competition; there is the possibility of increasing the chances for approval by making the project more attractive.” Brush and Marsden 1991
Solutions for Our Bias
It may seem that forecasting is hopeless. However, there are some simple solutions to the biases mentioned above:
- It is important to focus on key issues.
- Encourage regular critical reviews of field performance with multidiscipline teams to encourage feedback loops to improve forecasts (Peer review).
- Do not anchor your opinions too early; wait until you have seen the majority of the data set and obtained opinions from other disciplines.
- Use a systematic approach, follow a check list.
- Foster group input; make sure to get input from all members not just the more vocal or powerful ones.
- Use experience, and diversity of experience, to identify “grey swans.” This experience is especially useful if it comes from outside sources. “Always look first at what is not being modelled” (Michael Schrage, quoted in The Flaw of Averages, Sam Savage 2009).
- Don’t fight complexity with complexity
- Learning curves for operation and field staff should be built into field forecasts. Even experienced teams have to allow time for learning curves. Nandurdikar 2014
- Remember that production forecasts are created by imperfect humans. Most problems will be due to the interaction of components as well as implicit assumptions; be humble in your work as well as in your criticisms. In my experience it is often not bad intentions or lack of skill but missing data or grey swan effects that cause mismatches in forecasts. Characterization of a reservoir and its production system takes time.
- Keep experienced members on teams with a centralized accountability system. Nandurdikar 2014 mentions lack of single-point accountability and staff changeover as major problems in some projects.
It is important to compare actual performance with the model’s forecast on a regular basis (a look back). As a rule of thumb, look backs should be performed yearly for the first three years, then once every three years. One of the more successful forecasting examples is weather forecasting: regular updates, more data collection, and the combination of empirical and physical models have improved short-term weather forecasting dramatically over the last 30 years.
The frequency of that comparison should also be related to the economic importance of the field and the uncertainty in the input parameters; the more uncertain the data, the more frequent the look-backs need to be. Look-backs are especially valuable early in the process. It is recommended that the first look-back be done six months to a year after implementation of a waterflood/EOR, or at initial breakthrough of fluids.
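The core of a look-back comparison can be automated. The sketch below is an illustration only: the rates are hypothetical, and the ±25% band is taken from the typical error range quoted earlier for a well-defined drive mechanism. It computes the delivered fraction of the forecast and flags the periods that fall outside the band.

```python
def lookback_report(forecast, actual, band=0.25):
    """Compare forecast vs. actual rates period by period.
    Returns (delivery_ratio, misses): the fraction of the total
    forecast that was actually delivered, and the indices of
    periods whose deviation exceeds the +/- band."""
    misses = []
    for i, (f, a) in enumerate(zip(forecast, actual)):
        if abs(a - f) / f > band:
            misses.append(i)
    delivery = sum(actual) / sum(forecast)
    return delivery, misses

# Hypothetical yearly average oil rates, 1000 stb/d
forecast = [50, 48, 45, 41, 37]
actual   = [46, 40, 33, 30, 28]
ratio, misses = lookback_report(forecast, actual)
print(round(ratio, 2), misses)
```

In this made-up case the field delivers about 80% of the forecast, in line with the industry-wide figure quoted at the start of the chapter, and the flagged periods would be the first candidates for the diagnostic workflow above.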
Reservoir characterization is a slow, iterative process that takes time, thought, and data to be effective. Crucial factors in correcting biases are early and appropriate data acquisition and the expertise of the petroleum geologists and engineers involved in the study. A diversity of reviewers can also be very helpful in arriving at a better solution (The Wisdom of Crowds, James Surowiecki, Random House 2005).