On the difference between Data Distributions and Uncertainty Distributions

There is a subtle but important difference between data distributions and uncertainty distributions that is often overlooked, and misunderstood, in Quantitative Prospect Evaluation.

In Quantitative Prospect Evaluation we need to estimate several variables (like thickness, porosity, Net-to-Gross, etcetera). While doing that we realize that all these estimates carry uncertainty, which is why we create Uncertainty Distributions for all these parameters. These uncertainty distributions are then used as input distributions for a Probabilistic Volume Calculation.
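To make the role of those input distributions concrete, here is a minimal Monte Carlo sketch of such a probabilistic volume calculation. All distribution shapes and numbers are hypothetical and chosen purely for illustration; they do not come from any particular prospect.

```python
# Minimal Monte Carlo volumetric sketch (all inputs hypothetical).
# Each input is an *uncertainty* distribution for one prospect parameter.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000  # number of Monte Carlo realisations

grv  = rng.lognormal(mean=np.log(50e6), sigma=0.4, size=n)   # gross rock volume, m3
ntg  = rng.triangular(0.4, 0.6, 0.8, size=n)                 # net-to-gross, fraction
poro = rng.normal(0.20, 0.03, size=n).clip(0.05, 0.35)       # porosity, fraction
sw   = rng.normal(0.35, 0.05, size=n).clip(0.05, 0.90)       # water saturation, fraction
bo   = 1.2                                                   # formation volume factor, rm3/sm3

stoiip = grv * ntg * poro * (1.0 - sw) / bo                  # oil in place, sm3

# Industry convention: P90 = 90% chance of at least this volume (the 10th percentile).
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(f"P90={p90:.3g}  P50={p50:.3g}  P10={p10:.3g} sm3")
```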

Then what is the distinction between Data Distributions and Uncertainty Distributions?

I like to explain that with a simple analogy. In this example I try to estimate the height of the girl with the checked shirt in the class photo below. We are in the fortunate situation that we know the height of all the other persons in this photo, so by comparison we can estimate the height of this girl. Now the question is: what is the uncertainty of this estimate? The distribution of all the data points tells us something about the population (the data distribution), but it tells us nothing about the uncertainty of the estimate for this girl (the uncertainty distribution). The latter depends on the method, and the shortcomings of the method, by which we made that estimate.

[Image: antique class photo]
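One way to see the distinction is to put the two distributions side by side in a few lines of code. The numbers below are entirely hypothetical and serve only to illustrate the analogy.

```python
# Sketch of the class-photo analogy (all numbers hypothetical).
import numpy as np

rng = np.random.default_rng(seed=2)

# DATA distribution: the known heights of everyone else in the photo.
# It describes the spread within the population, not our estimate of the girl.
class_heights_cm = rng.normal(loc=150, scale=12, size=30)

# UNCERTAINTY distribution: our estimate of the girl's height.
# Its width comes from the *method* (comparison by eye, camera perspective,
# who she is standing next to), not from the spread of the class.
estimate_cm = 152          # best estimate obtained by comparison
method_error_cm = 3        # assumed standard error of the comparison method
girl_height_belief = rng.normal(loc=estimate_cm, scale=method_error_cm, size=10_000)

print("data distribution        : mean %.0f cm, spread (sd) %.0f cm"
      % (class_heights_cm.mean(), class_heights_cm.std()))
print("uncertainty distribution : mean %.0f cm, spread (sd) %.0f cm"
      % (girl_height_belief.mean(), girl_height_belief.std()))
```

The two spreads answer different questions: the first describes how much heights vary across the class, the second how far off our estimate for this one girl might be.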

This happens to be a common misunderstanding in Quantitative Prospect Evaluation. I think the cause of that misunderstanding is twofold:

• Several software products perpetuate this misunderstanding and use “The Distribution” without specifying what type of distribution it is.

• Lack of training in Quantitative Prospect Evaluation and the associated Monte Carlo methods used to create probabilistic volumes. Many geologists are simply handed a piece of software, without proper instruction on how to use it.




Our perception of risk is our reality.


Our perceptions are the ‘personal realities’ we construct from incoming data, processed through our cognitive and sensorial machinery, shaped and moulded by our personalities, gender, cultural background, knowledge, expertise and life experience. They are highly individual and highly subjective. 

This is how we assess and perceive risk. This is what determines our appetite for risk, our risk aversion and ultimately our judgement of risk. And in a world of risks and opportunities, the closer our personal ‘subjective realities’ align with true ‘objective reality’, the better our chances of thriving and surviving, of choosing the best opportunities and avoiding the worst disasters.

And the same is true across the business world and in the sciences.

In our assessment of risk in business and the sciences, we strive to make objective interpretations and predictions. Objective interpretations and predictions aim to be as close to true reality as possible. They are fact- and evidence-based. They are transparent, repeatable, measurable. And they are as free from cognitive bias, groupthink and personal influences as possible. How do we do that?

The short answer is that we employ a range of practical methods and tools that provide a common and consistent framework of logic, based on all the available facts and evidence. Developed and tweaked over many years by teams of technical experts, these methodologies slowly evolve and improve over time, narrowing the gap between predicted outcomes and actual outcomes.

That said, even the most carefully executed risk analysis must ultimately pass through the ‘perception filters’ of our leaders and decision-makers. And while the ’gut-feelers’ and the ‘Dunning-Krugers’ are at large and easy to spot, perhaps the greater enemy of objective reality lies in the false security of our reliance on ‘soft’ statistics.  

Take a simple dice-throwing example. Given that the probability of throwing a 6 is 1/6, we might 'intuitively feel' that it takes 6 throws of the dice to deliver a 6. In fact, throwing at least one 6 in the first 6 throws has a confidence of only 67%! To score at least one 6 with 90% confidence takes around 13 throws of the dice. Why? Because for small sample sizes, probabilistic outcomes are only predictable with low confidence. As sample size increases, confidence in scoring on average one 6 in 1/6 of the throws increases, approaching 100%.
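The arithmetic behind these numbers is a one-liner: the chance of at least one 6 in n throws is 1 − (5/6)^n. A quick check:

```python
# Chance of at least one 6 in n independent throws of a fair die.
for n in (6, 13):
    p = 1 - (5 / 6) ** n
    print(f"{n:>2} throws: P(at least one 6) = {p:.0%}")
# 6 throws -> ~67%; 13 throws -> ~91%, the first n that reaches 90% confidence
```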

And the same holds for a portfolio of oil and gas exploration prospects. 

Typically, oil and gas prospects are associated with a Chance of Success (CoS) of finding a given volume (defined as a probabilistic range). And, similar to the dice example, it's easy to see why we might 'intuitively feel' that drilling a simplistic portfolio of 4 identical and independent prospects, each with a CoS of 25%, would deliver around one success. In fact, for the same reasons, 8 'identical' prospects need to be drilled to deliver at least one 'technical success' with 90% confidence (defined as a success volume in the predicted range, V>P0). More critically, it takes 12 identical prospects to deliver one 'commercial success' with 90% confidence (assuming an economic success volume cut-off of V>P30). And interestingly, some 20+ identical prospects must be drilled to deliver at least one success volume in the mid-range of the probability distribution with 90% confidence (Swanson: P30<V<P70).
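The same one-line formula reproduces the portfolio numbers quoted above: the chance of at least one qualifying success in n independent prospects is 1 − (1 − p)^n, where p is the per-prospect chance of that kind of success (22 wells below stand in for the '20+').

```python
# Reproducing the portfolio arithmetic for the illustrative 25% CoS prospects.
def confidence(p, n):
    """Chance of at least one qualifying success in n independent prospects."""
    return 1 - (1 - p) ** n

cases = [
    # label,                               per-prospect chance,  wells quoted in the text
    ("technical success  (V > P0)",        0.25,                  8),   # CoS alone
    ("commercial success (V > P30)",       0.25 * 0.70,          12),   # CoS x 70%
    ("mid-range success  (P30 < V < P70)", 0.25 * 0.40,          22),   # CoS x 40%
]
for label, p, n in cases:
    print(f"{label}: {n} wells -> {confidence(p, n):.0%} confidence of at least one")
```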

[Image: Richard Barrett post]

There are three clear messages for oil and gas explorers:

  • Portfolio decisions are driven more by our perception of risk than by the objective reality of risk. Aside from the 'gut feel' syndromes, the probabilistic methods we employ to predict portfolio success rates (CoS), hydrocarbon success volumes and, not least, $ profits are, for most explorers, under-sampled and therefore carry low confidence.

  • Predicted portfolio success rates, success volumes and risked profits (EMV) are achievable only for portfolios of significant size. For 'small' portfolios, probabilistic predictions simply cannot be made with high confidence. How often, in promising exploration portfolio volumes to our investors, do we associate the portfolio volume expectation with a confidence? (A sketch of how that could look follows this list.)

  • Post-drill 'look-back and learn' studies on small sample probabilistic predictions have low confidence. A significantly large sample of outcomes is required to assess subjective, systemic bias or just plain error in our risk assessment capability. This applies equally to prospect risk and volume assessments and to component play elements (i.e. trap, reservoir, HC charge, seal).
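On the confidence question raised in the second bullet, a few lines of Monte Carlo are enough to quote a small portfolio's volume expectation together with a confidence range rather than as a single number. This is only a sketch, and every input below (portfolio size, CoS, success-case volume distribution) is hypothetical.

```python
# Hedged sketch: attach a confidence range to a small portfolio's volume expectation.
import numpy as np

rng = np.random.default_rng(seed=3)
n_prospects, cos = 5, 0.25                # small portfolio, 25% CoS per prospect
trials = 100_000

# Success-case volume per prospect: lognormal chosen only for illustration (MMbbl).
success_vol = rng.lognormal(mean=np.log(40), sigma=0.6, size=(trials, n_prospects))
discovered  = rng.random((trials, n_prospects)) < cos
portfolio_vol = (success_vol * discovered).sum(axis=1)

mean_vol = portfolio_vol.mean()
p90, p10 = np.percentile(portfolio_vol, [10, 90])   # P90 / P10 in exceedance convention
print(f"expected portfolio volume  ~ {mean_vol:.0f} MMbbl")
print(f"80% confidence range (P90-P10) ~ {p90:.0f} - {p10:.0f} MMbbl")
print(f"chance of zero discoveries : {(portfolio_vol == 0).mean():.0%}")
```

For a portfolio this small the range is strikingly wide (and there is a sizeable chance of no discovery at all), which is precisely the point about quoting expectations without a confidence.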

This conundrum is further addressed in the following LinkedIn article, where a range of mitigating approaches is also discussed: Finding Oil and Gas - Luck or Judgement (September 2019): https://www.linkedin.com/pulse/ding-oil-gas-luck-judgement-richard-barrett/

Richard Barrett, May 7, 2020
