The Limitations of Algorithms

It would be an understatement to say that the election result caught some people off-guard (just not Dave Chappelle or Chris Rock).

When these things happen, rational people try to learn from their mistakes by showing some humility. Political discussions are rarely rational, though, so the past week or so has been filled with hindsight bias, hubris and denial. Everyone is trying to figure out that single variable that will explain why millions of people did what they did even though they all have different goals, agendas, personal opinions and reasons for their actions.

People have spent a lot of time criticizing the polling models that turned out to be partially or wildly inaccurate at predicting the future (go figure), but the politicians themselves have also turned to technology to help on the campaign trail.

For the 2008 election, President Obama hired a product manager from Google to head up his media analytics team. They put to work technology and strategies that hadn't been used before, and it gave them a huge edge in a number of different areas, including increased donations and a better understanding of voters in different regions of the country.

As outlined in the book Algorithms to Live By, the majority of these first-mover advantages were gone by Obama's reelection efforts in 2012:

We know what happened to Obama in the 2008 election, of course. But what happened to his director of analytics, Dan Siroker? After the inauguration, Siroker returned west, to California, and with fellow Googler Pete Koomen co-founded the website optimization firm Optimizely. By the 2012 presidential election cycle, their company counted among its clients both the Obama re-election campaign and the campaign of Republican challenger Mitt Romney.

Within a decade or so after its first tentative use, A/B testing was no longer a secret weapon. It has become such a deeply embedded part of how business and politics are conducted online as to be effectively taken for granted. The next time you open your browser, you can be sure that the colors, images, text, perhaps even the prices you see — and certainly the ads — have come from an explore/exploit algorithm, tuning itself to your clicks. In this particular multi-armed bandit problem, you’re not the gambler; you’re the jackpot.
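To make the explore/exploit idea in that passage concrete, here is a toy epsilon-greedy bandit, one of the simplest approaches to this problem. The click-through rates and the two "variants" are invented purely for illustration and are not drawn from any real campaign:

```python
import random

def epsilon_greedy(true_rates, epsilon=0.1, trials=10_000, seed=42):
    """Simulate an epsilon-greedy A/B test over page variants.

    true_rates: hypothetical click-through rate for each variant.
    With probability epsilon, show a random variant (explore);
    otherwise show the variant with the best observed rate (exploit).
    """
    rng = random.Random(seed)
    clicks = [0] * len(true_rates)  # observed clicks per variant
    shows = [0] * len(true_rates)   # times each variant was shown

    for _ in range(trials):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))  # explore
        else:
            # exploit: pick the best observed click rate so far
            arm = max(range(len(true_rates)),
                      key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
        shows[arm] += 1
        if rng.random() < true_rates[arm]:
            clicks[arm] += 1
    return shows

# Two hypothetical button colors with different true click rates
shows = epsilon_greedy([0.05, 0.08])
```

Run long enough, the variant with the higher true rate ends up being shown far more often — the algorithm "tunes itself to your clicks," exactly as the quote describes.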

This is how competition works. Early adopters reap the biggest gains, which attracts competitors in search of that same edge. Eventually this levels the playing field and competitive advantages slowly subside.

The Clinton team tried to take things a step further in their efforts to use technology to their advantage this time around (as described by the Washington Post):

What Ada did, based on all that data, aides said, was run 400,000 simulations a day of what the race against Trump might look like. A report that was spit out would give campaign manager Robby Mook and others a detailed picture of which battleground states were most likely to tip the race in one direction or another — and guide decisions about where to spend time and deploy resources.

The use of analytics by campaigns was hardly unprecedented. But Clinton aides were convinced their work, which was far more sophisticated than anything employed by President Obama or GOP nominee Mitt Romney in 2012, gave them a big strategic advantage over Trump.
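To get a rough sense of what "running 400,000 simulations a day" might mean, here is a toy Monte Carlo sketch. The states, electoral votes, and win probabilities below are entirely made up for illustration and have nothing to do with Ada's actual (and far more sophisticated) model:

```python
import random

# Hypothetical battleground states: (electoral votes, candidate A's win probability)
BATTLEGROUNDS = {
    "State X": (20, 0.60),
    "State Y": (16, 0.55),
    "State Z": (10, 0.45),
}
SAFE_VOTES_A = 240  # made-up electoral votes assumed safe for candidate A
SAFE_VOTES_B = 252  # made-up electoral votes assumed safe for candidate B

def simulate_election(n_sims=100_000, seed=1):
    """Estimate candidate A's win probability by treating each
    battleground as an independent coin flip (a big simplification:
    real models account for correlated errors across states)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        votes_a = SAFE_VOTES_A
        for ev, p in BATTLEGROUNDS.values():
            if rng.random() < p:
                votes_a += ev
        if votes_a >= 270:  # electoral votes needed to win
            wins += 1
    return wins / n_sims

print(f"Candidate A wins {simulate_election():.1%} of simulations")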

To state the obvious, this model didn’t help all that much:

About some things, she was apparently right. Aides say Pennsylvania was pegged as an extremely important state early on, which explains why Clinton was such a frequent visitor and chose to hold her penultimate rally in Philadelphia on Monday night.

But it appears that the importance of other states Clinton would lose — including Michigan and Wisconsin — never became fully apparent or that it was too late once it did.

Again, there is no single variable that can explain the results of something as complex as a presidential election. This does, however, offer a good lesson in the limitations of the use of technology in our efforts to predict the future.

There are a number of studies showing that algorithms are typically better at making decisions than humans because they are disciplined and rules-based. They don’t allow emotions to cloud their judgment like we do.

Yet there are always going to be variables you can’t map out in a model. You can’t teach it common sense or human emotion. You can’t really model the future or random, unexpected events. The outputs are only as good as the inputs, so it’s always going to be garbage-in, garbage-out with these things.

And with greater use of technology in many facets of life going forward, the biggest beneficiaries will typically be those who get there first, not the second or third adopters.

As algorithms become more prevalent in our lives and the decision-making process it’s worth remembering both their strengths and limitations. These things are not infallible. They can help make our lives more efficient, but they are not yet the be-all, end-all. Our interpretations of the outputs will still play a large role in the success or failure of these models.

And as the adoption rate increases and more and more people put them to use, it will still be up to the humans operating the algorithms to differentiate between huge mistakes and successful outcomes.

An overreliance on technology could become one of the biggest mistakes people make going forward, as overconfidence shifts from our own abilities to those of an algorithm.

Sources:
Algorithms to Live By
Clinton’s data-driven campaign relied heavily on an algorithm named Ada. What didn’t she see? (Washington Post)

Further Reading:
The Problem With Intuitive Investing

Now here’s what I’ve been reading this week:

  • The 2 financial numbers you need to know (Jonathan Clements)
  • Demographics: Boomers have already been overtaken by the millennials (Fat Pitch)
  • Here’s why your portfolio won’t be trumped (Washington Post)
  • Financial lessons from the 2016 election (Oblivious Investor)
  • An evidence-based low volatility discussion (Alpha Architect)
  • How two trailblazing psychologists turned the world of decision science upside down (Vanity Fair)
  • Good luck university (A Teachable Moment)
  • Sustainable sources of competitive advantage (Collaborative Fund)
  • “If you’re making a decision about the future, which is by definition unknown, go with regret minimization.” (Irrelevant Investor)
  • The impact of women on corporate leadership (Thirty North)
  • Podcast: Marc Maron put together a compilation of stories from old SNL guests on his show (WTF)


Full Disclosure: Nothing on this site should ever be considered to be advice, research or an invitation to buy or sell any securities, please see my Terms & Conditions page for a full disclaimer.

  • patrick k

    I think “divine intervention” is more likely….just saying. ;~)

  • Eric Weigel

    Good points that anybody that has built models understands. People are used to seeing point estimates because market participants incorrectly reward confidence and precision (just look at sell side strategists with their year-end projections).

    What most people fail to see is that a point estimate is just one possible choice out of many within the forecast distribution.

    While I do not have the data I would be shocked to see some of these Clinton negative “surprises” not fall within the, say, 95% forecast error bands.

When dealing with a model, always seek to understand the likely probability distribution of the forecast. Money-making opportunities always have, in my experience, wide distribution bands. That is one of the reasons why I worry about the unquestioned use of machine learning algorithms in finance.

    All of this is not to say that models are useless. Models in most cases are a very reasonable starting point when properly structured (ie we have asked the right question) but an experienced human at the helm is absolutely necessary to understand the context and associated uncertainty.

    Eric Weigel
    Global Focus Capital
    eweigel@gf-cap.com

  • As an experienced model builder, I have found that models are more likely to fail when (a) garbage-in, garbage-out (GIGO), and/or (b) the models are linear but future events are non-linear. I think both came into play during this election cycle. Some statisticians were pointing out the likelihood of GIGO back in early October, and clearly there were some non-linearities going on as well.

    Nick de Peyster
    http://undervaluedstocks.info/

  • George Reed

    Thanks Ben for your insightful blogs. Trump’s IT team predicted his win a few days before the election. See this Megyn Kelly interview with Brad Parscale: http://insider.foxnews.com/2016/11/15/donald-trumps-digital-guru-explains-how-they-won-election