We admit it. Pundits who make market forecasts with lots of articulate reasons sound much smarter than we do.
Our business model is simple: buy what is strong and hold it until it becomes weak. Although I admit that doesn’t strike anyone as overly clever, let’s consider the odds behind making predictions. The best-known study on the accuracy of pundits was done by Philip Tetlock at the University of California, Berkeley. The Wall Street Journal recently carried an article (wonderfully written by Jonah Lehrer, author of How We Decide, a book I highly recommend) about his investigations right before the recent election. Dr. Tetlock first got interested in predictions in the run-up to the 1984 presidential election. He started tracking pundits to see who would be right. And here’s what he found:
Mr. Tetlock began to monitor their predictions, and a few years later, he came to a sobering conclusion: Everyone was wrong.
Of course, many of the soothsayers later claimed to have predicted everything that happened! Dr. Tetlock’s investigation turned into a 20-year obsession.
The dismal performance of the experts inspired Mr. Tetlock to turn his case study into an epic experimental project. He picked 284 people who made their living “commenting or offering advice on political and economic trends,” including journalists, foreign policy specialists, economists and intelligence analysts, and began asking them to make predictions. Over the next two decades, he peppered them with questions: Would George Bush be re-elected? Would apartheid in South Africa end peacefully? Would Quebec secede from Canada? Would the dot-com bubble burst? In each case, the pundits rated the probability of several possible outcomes. By the end of the study, Mr. Tetlock had quantified 82,361 predictions.
More than 82,000 quantified predictions, I think, counts as a statistically valid sample size. There was really only one problem, and unfortunately it was a rather large one.
How did the experts do? When it came to predicting the likelihood of an outcome, the vast majority performed worse than random chance. In other words, they would have done better picking their answers blindly out of a hat. Liberals, moderates and conservatives were all equally ineffective. Although 96% of the subjects had post-graduate training, Mr. Tetlock found, the fancy degrees were mostly useless when it came to forecasting.
The main reason for the inaccuracy has to do with overconfidence. Because the experts were convinced that they were right, they tended to ignore all the evidence suggesting they were wrong. This is known as confirmation bias, and it leads people to hold all sorts of erroneous opinions. Famous experts were especially prone to overconfidence, which is why they tended to do the worst. Unfortunately, we are blind to this blind spot: Most of the experts in the study claimed that they were dispassionately analyzing the evidence. In reality, they were indulging in selective ignorance, as they explained away dissonant facts and contradictory data. The end result, Mr. Tetlock says, is that the pundits became “prisoners of their preconceptions.” And their preconceptions were mostly worthless.
The problems with predictions are manifold. 1) Experts have preconceptions, 2) experts have confirmation bias, and 3) experts are blind to their blind spot. Post-graduate degrees and fame didn’t help. In fact, prominent experts tended to do the worst. And, frankly, it’s not just experts who are blind to their blind spot—we all are. Our mental software is just built that way.
Why not admit to the blind spot and go with a method that ignores your own preconceptions? Systematic trend following with relative strength is just a recognition of what forecasters refuse to admit: their predictions are worthless. According to Dr. Tetlock’s data, you would have a better track record if you flipped a coin. Although systematic trend following doesn’t earn style points, it can be quite profitable. (It may even earn anti-style points. One of our colleagues was once referred to as a “trend following moron.”) The next time you hear a prediction on CNBC, cover your ears and just look at the trend.
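The "buy what is strong, hold it until it becomes weak" rule can be sketched as a simple relative-strength rotation. To be clear, this is a minimal illustration, not the author's actual model: the lookback window, the "top half" definition of strength, and the toy price data are all assumptions made up for demonstration.

```python
# Illustrative sketch of a relative-strength rotation rule:
# rank assets by trailing return, hold the current position
# while it stays strong, and rotate to the leader once it weakens.
# Lookback length and the "weak" threshold are arbitrary
# assumptions for demonstration only.

def trailing_return(prices, lookback):
    """Simple return over the last `lookback` periods."""
    return prices[-1] / prices[-lookback - 1] - 1.0

def pick_holding(price_history, current, lookback=3):
    """Rank assets by relative strength; keep `current` unless it
    has dropped out of the top half, then rotate to the leader."""
    ranked = sorted(price_history,
                    key=lambda a: trailing_return(price_history[a], lookback),
                    reverse=True)
    top_half = ranked[: max(1, len(ranked) // 2)]
    if current in top_half:
        return current          # still strong: hold
    return ranked[0]            # weak: rotate into the new leader

# Toy price histories (made-up numbers).
history = {
    "A": [100, 104, 109, 115],   # strong uptrend
    "B": [100, 101, 100, 99],    # flat and weakening
    "C": [100, 98, 96, 95],      # downtrend
}
print(pick_holding(history, current="B"))  # B is weak -> rotates to "A"
```

Note that the rule never asks *why* an asset is strong; it simply follows the ranking, which is the whole point of ignoring one's own preconceptions.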
Hat tip to NS and DL.
Posted by Mike Moody