“Recognizing that the amount of confidence someone expresses in a prediction is not a good indicator of its accuracy – to the contrary, these qualities are often inversely correlated. Danger lurks, in the economy and elsewhere, when we discourage forecasters from making a full and explicit account of the risks inherent in the world around us.” – Nate Silver
In My 2014 Market Forecast I didn’t really make any predictions. The point of the post was to show how difficult it is to make short-term predictions about the state of the markets.
Even many very intelligent people from the world of finance have a difficult time making predictions.
To help explain how forecasting can go wrong, look no further than your local newsroom’s weather team.
In The Signal and the Noise, forecaster extraordinaire Nate Silver describes the process of predicting the weather.
Basically, there are two kinds of weather forecasting services. One is run by the government, which you can find at weather.gov. This service gives fairly basic weather outlooks backed by computer models. The other is of a commercial variety, either through The Weather Channel or the local news.
It’s still not a precise science, but unlike economic forecasting, meteorologists have gotten more accurate with their predictions, mainly through improved technology.
The problem is that the forecasts are good but not perfect. They combine computer models and human judgment. Humans can obviously make them better or worse.
Since the government system doesn’t rely much on human judgment, it tends to be fairly consistent.
When the government weather service says there is a 20% chance of rain, it really does rain 20% of the time.
When Weather Channel forecasters predict a 20% chance of rain, it only rains 5% of the time. They even admit to doing this to gain more viewers.
One study found that when a local meteorologist predicted a 100% chance of rain, it failed to rain one-third of the time.
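This gap between stated probability and observed frequency is what forecasters call calibration, and it's easy to check given a record of forecasts and outcomes. The sketch below is purely illustrative (the data is made up, not from the studies cited here): it groups forecasts by the probability quoted and compares that with how often rain actually occurred.

```python
from collections import defaultdict

def calibration(forecasts, outcomes):
    """For each stated probability, return the observed frequency of rain.

    forecasts: stated probabilities of rain (e.g. 0.2 for "20% chance")
    outcomes:  1 if it rained that day, 0 if it didn't
    """
    buckets = defaultdict(list)
    for p, rained in zip(forecasts, outcomes):
        buckets[p].append(rained)
    # A well-calibrated forecaster's 20% calls verify about 20% of the time.
    return {p: sum(obs) / len(obs) for p, obs in sorted(buckets.items())}

# Illustrative record: ten "20% chance of rain" calls, rain on 2 of the 10 days.
forecasts = [0.2] * 10
outcomes = [1, 0, 0, 0, 0, 0, 0, 1, 0, 0]
print(calibration(forecasts, outcomes))  # {0.2: 0.2} -- well calibrated
```

By this measure the government service described above is well calibrated, while a "wet bias" shows up as observed frequencies consistently below the stated probabilities.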
Here’s Silver’s take from the book:
For-profit weather forecasters rarely predict exactly a 50% chance of rain, which might seem wishy-washy and indecisive to consumers. Instead, they’ll flip a coin and round up to 60, or down to 40, even though this makes the forecasts both less accurate and less honest.
Most commercial weather forecasts are biased toward forecasting more precipitation than will actually occur – what meteorologists call a “wet bias.” The further you get from the government’s original data, and the more consumer-facing the forecasts, the worse this bias becomes. Forecasts “add value” by subtracting accuracy.
Presentation of the story by the news team takes precedence over the accuracy of the actual forecast. The forecasters figure it’s no big deal: since the public doesn’t think the forecasts are any good anyway, who cares? This creates a vicious cycle of inaccuracy.
The same dynamic of human nature taking over happens with market and economic soothsayers.
Pundit Tracker, a website that seeks to follow expert predictions and show their track records to hold them accountable, describes the current state of financial pundits this way:
A pundit simply has to use the following playbook:
1. make a bold prediction
2. if wrong – keep quiet; don’t worry, no one will remember
3. if right – self promote, write a book, go on TV, etc
Not a bad gig if you can get it.
Obviously, there are people out there who can make level-headed forecasts about the future without sensationalizing their predictions in 30-second TV sound bites.
There are some people out there worth listening to in the financial complex.
You just need a solid filter to be able to weed out the useful commentators from the typical self-serving pundits.
Silver provides an excellent breakdown of good and weak forecasting attributes in the book.
Good forecasters tend to be multidisciplinary (they get their ideas from different disciplines), adaptable, self-critical, tolerant of complexity, cautious and empirical (utilize observation over theory).
On the other hand, weak forecasters are usually specialized, stalwart (all-in approach to their models), stubborn, order-seeking, confident and ideological (grand theory explains it all).
Silver also gives some solid tips at the end of the book for becoming a better forecaster.
You must think probabilistically. This allows you to make room for uncertainty while making educated guesses on how things will turn out.
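One standard way to make probabilistic thinking concrete – not something from this post, but a common tool for grading probability forecasts – is the Brier score, which penalizes a forecast by the squared distance between the stated probability and what actually happened. This sketch uses made-up numbers purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and outcomes (0 or 1).

    Lower is better; 0 is a perfect forecast, 1 is maximally wrong.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hedged 60% call that verifies is penalized a little...
print(brier_score([0.6], [1]))  # 0.16
# ...while an all-in 100% call that busts takes the maximum penalty.
print(brier_score([1.0], [0]))  # 1.0
```

The scoring rule rewards exactly the behavior described above: leaving room for uncertainty costs you a little when you're right, but saves you a lot when you're wrong.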
You should also build on past experience to establish a baseline from which your predictions can evolve. That way you can be honest about any biases that may creep into your logic.
Trial and error is also important. You must be flexible in order to update your line of thinking when new information is presented. The majority of mistakes I’ve seen from investors come from those who are unwilling or unable to deviate from their prior convictions.
Humans typically err on the side of overconfidence to their detriment, so stay away from all-or-nothing beliefs.
And finally, you must be able to accept uncertainty as a fact of life if you ever want to be able to separate the signal from the noise. Silver says we should have the “serenity to accept the things we cannot predict, the courage to predict the things we can, and the wisdom to know the difference.”
This is not an easy undertaking. The first step is to focus on what you can control. The rest will come with experience and a solid BS detector.
Can weather move the stock market?
Silver mentions one of my favorite episodes of Curb Your Enthusiasm in The Signal and the Noise about Larry David and his conspiracy theory about his local weatherman. Here’s a clip: