Rafael Sandrea, PhD
Uncertainty is the only certain thing about the oil business, and within this context we have to deal with the realities of any project evaluation, be it exploration, field performance or the final lap of EOR. The techniques and tools for dealing with uncertainty are many and well documented. My interest here is to draw attention to a few examples of misuse, sometimes unwitting and sometimes disingenuous, that can bring about unwanted or useless results.
Let’s look at the genesis of an upstream project evaluation. It starts with the estimation of newly discovered reserves, which in turn shapes the production profile, which finally translates to cash flow, the center of economic evaluation. Lacking field performance history, which is ultimately the decisive tool for defining reserves, the estimation of initial reserves requires the volumetric model. The deterministic, or single-point, approach to calculating volumetric reserves is a routine engineering/geologic exercise that requires solo values for the individual reservoir parameters: porosity, saturation, sand thickness, and so on. This makes it simple to calculate, and the numbers are easier to remember.
Should we use the mean or the median to define these solo values? As a general observation, the mean, or average of all data points in a set, is pulled by extreme values, referred to as outliers, while the median is largely insensitive to them. This basic principle should drive the choice between mean and median. For instance, when comparing the sizes of, say, the 20 largest oil and gas fields discovered, the outliers (mega-giants like Ghawar and North Field) are very important, and the mean is more relevant than the median. In large data sets the mean and the median tend to be close anyway. There is a third measure, the mode: the value that occurs most often in the data set and hence the single most likely value. However, since it need not be unique, it is not often used.
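The pull of an outlier on the mean, and the median's insensitivity to it, can be seen in a few lines of Python. The field sizes below are hypothetical, with one mega-giant standing in for a Ghawar-class outlier:

```python
import statistics

# Hypothetical recoverable-reserves figures (billion barrels) for a
# small set of fields; 140 plays the role of a mega-giant outlier.
field_sizes = [2, 3, 4, 5, 6, 8, 10, 140]

mean = statistics.mean(field_sizes)      # pulled up sharply by the outlier
median = statistics.median(field_sizes)  # barely notices the outlier

print(f"mean   = {mean:.1f}")    # 22.2
print(f"median = {median:.1f}")  # 5.5
```

Drop the outlier and the mean collapses toward the median, which illustrates why the choice of statistic should follow from whether the outliers matter for the question being asked.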
Overall, volumetric estimation of reserves is so cloaked in uncertainties that a probabilistic approach, rather than a deterministic one, is the preferred course. Monte Carlo or similar simulation is the accepted technique; 1P, or 90%-certainty, reserves as required by the SEC can only be defined with this methodology. The volumetric model technically calculates only the oil-in-place; a recovery factor has to be assumed in order to convert these resources to reserves. And for new fields this can be a skewed choice; hence field performance is the ultimate arbiter of how many barrels are actually recoverable. Comparisons (for giant oil and gas fields discovered after 1970) between volumetric (initial) and field-performance (later) estimates indicate a tendency for the volumetric figure to be larger, by as much as 25%, attributable mostly to the choice of recovery factor.
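A minimal Monte Carlo sketch of the idea, using the standard volumetric formula (7758 A h φ (1−Sw)/Bo, in stock-tank barrels) with triangular distributions. Every parameter range and the recovery factor here are illustrative assumptions, not data from any field:

```python
import random

random.seed(42)  # reproducible illustration

def sample_ooip_stb():
    """One Monte Carlo draw of original oil-in-place (hypothetical ranges)."""
    area = random.triangular(800, 1500, 1000)    # acres (low, high, mode)
    h = random.triangular(50, 120, 80)           # net pay, ft
    phi = random.triangular(0.15, 0.28, 0.22)    # porosity
    sw = random.triangular(0.20, 0.40, 0.30)     # water saturation
    bo = random.triangular(1.1, 1.4, 1.25)       # formation volume factor, rb/stb
    return 7758 * area * h * phi * (1 - sw) / bo

rf = 0.30  # assumed recovery factor -- the skewed choice the text warns about
trials = sorted(sample_ooip_stb() * rf for _ in range(10_000))

p90 = trials[int(0.10 * len(trials))]  # 1P: 90% chance reserves exceed this
p50 = trials[int(0.50 * len(trials))]  # 2P-style median outcome
p10 = trials[int(0.90 * len(trials))]  # upside case
print(f"1P/P90 = {p90/1e6:.0f} MMstb, P50 = {p50/1e6:.0f} MMstb, P10 = {p10/1e6:.0f} MMstb")
```

The single-point deterministic answer corresponds to one draw at the modal values; the simulation instead yields the full distribution from which the 90%-certainty (1P) figure is read off.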
The next step towards monetizing the now-defined reserves is to generate a future production profile of the field based on the assumed relative strengths of the different drive mechanisms that may be active in the various reservoirs of the field. Again, these assumptions can only be validated in time with performance. Perhaps the largest risk in project evaluation is ‘decline and depletion’, a concept that baffles many analysts, is largely concealed, but harshly affects the production profile. To use an observable example, let’s take a look at forecasting oil supply. This subject has become increasingly thorny as imbalances in supply and demand have become more frequent, with the undesirable consequences of price peaks, bubbles or whatever you want to call them.
The major institutions involved in production outlooks (EIA, IEA, IHS, Wood Mackenzie, to name a few) all base their forecasts on a ‘pancake’ model, wherein production profiles from new field discoveries, EOR and unconventionals are simply stacked (hence the name pancake) on the existing historical production profile. This technique not only introduces highly biased data, in both volume and timing, into the updated outlook profile, but, more importantly, it neglects the strongly non-linear interference effects of competing resource inputs. A clear case in point: global crude oil production was at a capacity high of 63 mb/d in 1979, and 30 years later it has barely grown 9 mb/d to 72 mb/d. During the recession of the 1980s, 10 mb/d was shut in (which gives extra available capacity), and over the last three decades 400+ billion barrels of new reserves have been discovered, the equivalent of two Saudi Arabias or six North Seas. So why has production capacity increased a paltry 9 mb/d? Decline and depletion effects.
The pancake model forecasts global crude oil outputs of 82 mb/d by 2020 and 93 mb/d by 2030, increases of 12 mb/d and 20 mb/d, respectively, from the current level. Extremely optimistic, to say the least, when over the previous 30 years actual field output increased only 9 mb/d!
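A toy calculation makes the point: stacking new supply on a flat base (the pancake model) versus laying the same additions on a base that declines. The 4%/yr base-decline rate and 1 mb/d of annual additions are assumptions chosen purely for illustration:

```python
# Toy comparison of the "pancake" model against a declining base.
base = 72.0               # mb/d, current capacity (figure from the text)
additions_per_year = 1.0  # mb/d of new capacity brought on each year (assumed)
decline = 0.04            # assumed annual decline rate of the existing base

years = 10
# Pancake model: new supply simply stacked on a base held flat.
pancake = base + additions_per_year * years

# Same additions, but the base (and prior additions) decline each year.
declining = base
for _ in range(years):
    declining = declining * (1 - decline) + additions_per_year

print(f"pancake model after {years} yr: {pancake:.1f} mb/d")
print(f"with base decline:           {declining:.1f} mb/d")
```

With these assumed numbers the pancake model lands near the 82 mb/d forecast quoted above, while the declining-base case comes out roughly 25 mb/d lower, which is the kind of gap decline and depletion can open up.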
The final step to monetization of the reserves asset, discounted cash flow, requires assumptions regarding several economic parameters, including oil prices, which are notoriously volatile and difficult to predict. Oil price, an indispensable iniquity for any economic evaluation, is undoubtedly the single most important and visible risk factor in project evaluation. I’ll leave this dicey topic to John Tobin (“Why Bother…”, PennEnergy Research 2010), who wallows in the explanations, although the results are the same: they are going to be wrong anyway! There is no model that can predict even next-day NYMEX prices, while the evaluation process requires that we ‘estimate’ oil prices for the duration of the project, maybe two-plus decades. So how can we get around this predicament? The best suggestion is to take the long-term view that oil prices will always keep pace with inflation, and conduct a sensitivity analysis on prices and production at different levels. THANK YOU.
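That long-term view lends itself to a simple sensitivity sweep. The sketch below is a bare-bones discounted cash flow with prices held flat in real terms; the production profile, costs and discount rate are all hypothetical:

```python
def npv(price, volumes, opex_per_bbl, capex, rate=0.10):
    """Discounted cash flow: capex at t=0, year-end cash flows thereafter.
    volumes in MMbbl/yr, price and opex in $/bbl, result in $MM."""
    cash = -capex
    for t, vol in enumerate(volumes, start=1):
        cash += vol * (price - opex_per_bbl) / (1 + rate) ** t
    return cash

# Hypothetical declining production profile (MMbbl/yr) over a 10-year life.
profile = [5.0 * 0.85 ** t for t in range(10)]

for price in (50, 70, 90):  # $/bbl scenarios, flat in real terms
    print(f"price ${price}/bbl -> NPV = ${npv(price, profile, 20, 400):.0f} MM")
```

Running the same profile through a handful of flat-price scenarios shows directly how sensitive the project's value is to the price assumption, which is the practical substitute for a price forecast no model can deliver.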
Talk given at the USGS Reserves Growth Conference, Denver, Aug. 3-5, 2010.
For more of Dr Rafael Sandrea’s work on global oil and gas resources see: An In-Depth View of Future World Oil & Gas Supply - A Quantitative Model which is available online through PennEnergy.com.