Why Experts Are Almost Always Wrong

No one, not even the experts, really knows what’s about to happen

This crystal ball won’t help you. (lukesaagi)

Every time there’s a national disaster, a gigantic event, a shooting, a breakthrough, really any news at all, you can rely on television news to find an expert. Some of them know quite a lot about what happened, what will happen, and why. But a lot of experts really have no idea what they’re talking about.

Blogger Eric Barker points out that political experts’ predictions are only slightly better than a random guess, and way worse than a statistical model. In fact, so-called experts were better at predicting events outside their own field. Barker points to a study Philip Tetlock began in the 1980s, in which 284 political “experts” made about a hundred predictions. The study is summarized in the book Everything Is Obvious: *Once You Know the Answer:

For each of these predictions, Tetlock insisted that the experts specify which of two outcomes they expected and also assign a probability to their prediction. He did so in a way that confident predictions scored more points when correct, but also lost more points when mistaken. With those predictions in hand, he then sat back and waited for the events themselves to play out. Twenty years later, he published his results, and what he found was striking: Although the experts performed slightly better than random guessing, they did not perform as well as even a minimally sophisticated statistical model. Even more surprisingly, the experts did slightly better when operating outside their area of expertise than within it.
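
Tetlock’s exact scoring formula isn’t spelled out in that excerpt, but a standard probability score, like the Brier score, behaves exactly the way it describes: the bolder the forecast, the bigger the reward when it’s right and the bigger the penalty when it’s wrong. Here’s a rough Python sketch, with the Brier score standing in as an illustration rather than as Tetlock’s actual method:

```python
# Illustrative sketch only: the Brier score is one common way to grade
# probability forecasts, used here as a stand-in for Tetlock's scoring.
# It measures the squared gap between the stated probability and what
# actually happened, so lower is better.

def brier_score(forecast_probability, outcome_happened):
    """Squared error between the forecast probability and the actual outcome."""
    actual = 1.0 if outcome_happened else 0.0
    return (forecast_probability - actual) ** 2

# A confident forecast (90 percent) versus a hedged one (60 percent):
print(round(brier_score(0.9, True), 2))   # 0.01 -- confident and right: tiny penalty
print(round(brier_score(0.6, True), 2))   # 0.16 -- hedged and right: modest penalty
print(round(brier_score(0.6, False), 2))  # 0.36 -- hedged and wrong: larger penalty
print(round(brier_score(0.9, False), 2))  # 0.81 -- confident and wrong: largest penalty
```

Under a rule like this, a pundit who attaches 90 percent confidence to every hunch gets punished hard whenever reality refuses to cooperate, which is part of why the confident experts in Tetlock’s study fared so poorly.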

Another study found that “experts” who try to predict the outcome of Supreme Court cases weren’t much better than a computer. The world saw evidence of that in the court’s recent health care decision, which surprised nearly every “expert” out there.

But that’s politics. Other fields should be better, right? Nope. Technology is the same way. One researcher analyzed the accuracy of technology-trend predictions and found that about eighty percent of them were wrong, whether they were made by experts or not.

In 2005, Tetlock wrote a book about expert prediction called “Expert Political Judgment: How Good Is It? How Can We Know?” In it, he explains that not only are experts often wrong, but they’re almost never called out on it. The New Yorker explains:

When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake.

Tetlock points out that while experts as a group are terrible at predictions, they fall into two “cognitive styles” when making them: foxes and hedgehogs. The Huffington Post summarizes:

Foxes know many things while hedgehogs know one big thing. Being deeply knowledgeable on one subject narrows one’s focus and increases confidence, but it also blurs dissenting views until they are no longer visible, thereby transforming data collection into bias confirmation and morphing self-deception into self-assurance. The world is a messy, complex, and contingent place with countless intervening variables and confounding factors, which foxes are comfortable with but hedgehogs are not. Low scorers in Tetlock’s study were “thinkers who ‘know one big thing,’ aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who ‘do not get it,’ and express considerable confidence that they are already pretty proficient forecasters.” By contrast, says Tetlock, high scorers were “thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ‘ad hocery’ that require sticking together diverse sources of information, and are rather diffident about their own forecasting prowess.”

But what about the 10,000-hour rule? Did you really spend 10,000 hours only to end up with a barely-better-than-random chance of predicting outcomes in your chosen field? Probably. Barker cites another book, Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else:

Extensive research in a wide range of fields shows that many people not only fail to become outstandingly good at what they do, no matter how many years they spend doing it, they frequently don’t even get any better than they were when they started.

In field after field, when it came to centrally important skills—stockbrokers recommending stocks, parole officers predicting recidivism, college admissions officials judging applicants—people with lots of experience were no better at their jobs than those with very little experience.

The moral here? We really have no idea what’s going to happen, ever.

