“Humility is attainable, even if forecasting accuracy is not.” Philip Tetlock
As the 2018 market outlooks begin to flood my inbox, I’m reminded of one of the best books written on the topic of expert forecasting abilities — Expert Political Judgment by Philip Tetlock.
Tetlock released the first version of this book in 2006 but re-released an updated version this past summer. In the foreword, he says he was surprised by the book’s success when it first came out, but that success makes sense given the sheer number of experts who have been made to look silly over the past 10+ years.
The book covers his studies of recognized experts in a number of fields — politics, economics, markets, etc. — drawing on more than two decades of research. His main takeaway is that the so-called experts didn’t do much better at predicting what was going to happen than random chance.
Tetlock doesn’t simply excoriate pundits like many do these days. He looks at all sides of the argument, offers some reasons why experts are often wrong, and even gives them credit for the areas where they do add value in these types of endeavors.
One of my favorite passages from the book discusses the topic of counterfactuals, which is something many people don’t take into account when looking through the lens of history:
Learning from the past is hard, in part, because history is a terrible teacher. By the generous standards of the laboratory sciences, Clio is stingy in her feedback: she never gives us the exact comparison cases we need to determine causality (those are cordoned off in the what-iffy realm of counterfactuals), and she often begrudges us even the roughly comparable real-world cases that we need to make educated guesses. The control groups “exist”—if that is the right word—only in the imaginations of observers, who must guess how history would have unfolded if, say, Churchill rather than Chamberlain had been prime minister during the Munich crisis of 1938 (could we have averted World War II?) or if, say, the United States had moved more aggressively against the Soviet Union during the Cuban missile crisis of 1962 (could we have triggered World War III?).
But we, the pupils, should not escape all blame. A warehouse of experimental evidence now attests to our cognitive shortcomings: our willingness to jump the inferential gun, to be too quick to draw strong conclusions from ambiguous evidence, and to be too slow to change our minds as disconfirming observations trickle in. A balanced apportionment of blame should acknowledge that learning is hard because even seasoned professionals are ill-equipped to cope with the complexity, ambiguity, and dissonance inherent in assessing causation in history. Life throws up a lot of puzzling events that thoughtful observers feel impelled to explain because the policy stakes are so high. However, just because we want an explanation does not mean that one is within reach. To achieve explanatory closure in history, observers must fill in the missing counterfactual comparison scenarios with elaborate stories grounded in their deepest assumptions about how the world works.
Life is much more random than many would like to admit, so we create stories and narratives to explain the way the world works through our hindsight bias. Tetlock actually came up with five ways our preconceptions shape a pundit’s view of reality:
1. Experts can talk themselves into believing they can do things they cannot. There are diminishing returns to knowledge in the prediction game, but overconfidence often trumps this fact.
2. Experts are reluctant to admit when they’re wrong and change their minds. This is our cognitive dissonance on display.
3. Experts fall prey to the hindsight bias. People convince themselves they “knew it all along” even when completely unpredictable events occur (or they just plain missed it).
4. Experts fall prey to confirmation bias. Experts have a hard time viewing the other side of an argument.
5. We’re all pattern-seeking creatures. We look for structure or consistency where often none exists in the real world, which is quite random most of the time.
In Organizational Alpha, I touched on different ways to make better decisions and one of them was to avoid making excuses or casting blame on others. I used some previous work Tetlock had done in another research paper:
You’re not always going to get the best outcomes but that’s not the point of a good process. A good process is about making high-probability decisions. No one is good enough to be right all the time.
Philip Tetlock has spent his career studying forecasters, how accurate their predictions are (not very), and the typical excuses they make when they’re wrong.
Throughout his research he came up with five excuses that experts make on a regular basis when they’re wrong:
1. The “if only” clause: If only this one thing would have gone my way I would have been right.
2. The “ceteris paribus” clause: Something completely unexpected happened so it’s not my fault.
3. The “it almost occurred” clause: It didn’t happen but I was close.
4. The “just wait” clause: I’m not wrong, I’m just early.
5. The “don’t hold it against me” clause: It’s just one prediction.
I’m definitely not anti-pundit or anti-forecast. We’re all forced to make decisions in the face of unavoidable uncertainty. But going through Tetlock’s work is always a great reminder for me that (a) predicting the future is hard, (b) that fact won’t stop people from doing it, and (c) it makes sense to be as humble as possible when thinking about how the future will unfold.
This is worth remembering when you see your favorite pundit on TV, in print, or on social media in the coming months with their 2018 outlooks. There’s a high likelihood that most of them will be wrong.
Source:
Expert Political Judgment: How Good Is It? How Can We Know?
Further Reading:
No One Knows What Will Happen