Why more positive ‘counterfactual reasoning’ is better than reactive ‘disconfirmation’ in dealing with extremely uncertain situations.
Discussion of risk management in the past two decades has often revolved around two catchy phrases: the “known unknowns” coined in 2002 by then-US Defence Secretary Donald Rumsfeld, and the “black swans” of Nassim Nicholas Taleb’s 2007 book The Black Swan, which focused on how we deal with unpredictable, outlier events.
Both phrases deal with uncertainty in strategic decision-making, and a common approach to dealing with such ambiguity has been “disconfirmation”. This entails testing an assumed hypothesis by deliberately looking for new, inconsistent evidence – and if such evidence is found, Hypothesis 1 is discarded and replaced by a new Hypothesis 2, which is then tested in turn. The more tests a hypothesis passes, the greater the confidence in that hypothesis.
This sounds straightforward and effective, so why look further than disconfirmation to tackle situations of extreme uncertainty?
There’s a simple reason why disconfirmation, though often impressive on paper, usually fails in the real world of business: even if a new Hypothesis 2 passes more tests than its predecessor, plenty of ambiguity and uncertainty still remains.
In the real world of project management and budgets, a vice president for finance may conclude: “You’re telling me that we are missing something in Hypothesis 1 and have to start from scratch after we’ve invested millions in this project? But you’re also not really sure about Hypothesis 2, even though it’s passed a few more tests. So we’re not going to throw out Hypothesis 1: it’s close enough to what we were looking for, and you don’t really have proof that Hypothesis 2 is better. All you really know is that Hypothesis 2 has passed more tests so far – I repeat, so far.”
The term for this sort of very common thinking is “escalation of commitment” – a tendency not to change course following significant investment (which may be financial, social or reputational) even when faced with a rising risk of a negative outcome in sticking to the present course. Skin in the game can translate into passion and commitment, which are often positive, but it can also lead to a blinkered approach that digs deeper holes if bad news arises.
Yet there is an alternative and far more positive approach to “disconfirmation”. As argued in a recent paper co-authored by colleagues at the University of Cambridge, SOAS University of London, Cass Business School and myself, “counterfactual reasoning” is often a better approach when dealing with situations of extreme uncertainty.
Rather than seeking to poke holes in something (the reactive approach employed in disconfirmation), “counterfactual reasoning” instead seeks to construct possible alternatives (“can you think of something else that might be going on here?”) and then actively seek evidence in favour of those alternatives – a truly different view of the world rather than something that merely passes a greater number of (still inconclusive) tests. Counterfactual reasoning thus involves a creative activity: inventing an alternative hypothesis that is consistent with the existing evidence, and then seeking further evidence that is also in line with this alternative (and inconsistent with the original hypothesis).
So let’s take the example of a furniture manufacturer planning to produce a new pine chest of drawers. Under a “disconfirmation” approach, there would be tests on the strength of the drawers themselves (can they hold the anticipated weight?), whether pine is in fact the best material, and the price of key components such as drawer handles. In contrast, someone following a “counterfactual reasoning” approach instead says: “I’ve been thinking about this for a long time, and what customers really care about is the design of this piece of furniture more than the material itself, so I’ve come up with a radical new look that will generate buzz and sales.”
With apologies to Mr Rumsfeld (and artists everywhere), this approach finds new and positive “known unknowns” (the very creative proposal of them makes them “knowable”) that paint a bright new landscape rather than seeking “unknown unknowns” that loom somewhere and might somehow darken the original canvas.
A counterfactual reasoning approach looks at a completely different driver from the one previously discussed. It requires that people jump over a hump and say: “I’m actively looking for an alternative instead of looking for evidence that what you’re doing is wrong.” It’s definitely more work, but more work often yields superior results – and counterfactual reasoning is no exception.
Great article and insight!
Thanks for an insightful piece. I look forward to investigating how counterfactual reasoning as an approach can potentially help risk managers improve and avoid certain cognitive biases.
Dr. Aynur Unal
Decision-making under uncertainty can be dealt with quantitatively by using hybrid AI and fuzzy reasoning, and this has been used successfully in engineering. Hybrid ANNs, for example, are used in nuclear safety in both the USA and Japan.
Stanford Law School has new initiatives in this regard.