What’s Hot…and Not

October 14, 2011

How different investments have done over the past 12 months, 6 months, and 1 month.

Chart legend: 1) PowerShares DB Gold; 2) iShares MSCI Emerging Markets ETF; 3) iShares DJ U.S. Real Estate Index; 4) iShares S&P Europe 350 Index; 5) GreenHaven Continuous Commodity Index; 6) iBoxx High Yield Corporate Bond Fund; 7) JP Morgan Emerging Markets Bond Fund; 8) PowerShares DB US Dollar Index; 9) iBoxx Investment Grade Corporate Bond Fund; 10) PowerShares DB Oil; 11) iShares Barclays 20+ Year Treasury Bond


Moneyball and Risk Aversion

October 14, 2011

The popular Michael Lewis book, Moneyball, has now been turned into a movie. Most of the fascination with the Moneyball concept has to do with using data to drive better decisions. In Moneyball, it was about using data to win baseball games, but data-driven decisions can be used to win in other sports, or in the financial markets.

Jonah Lehrer’s Frontal Cortex column discusses the use of data in football—and points out the incredible resistance to using it. The football researcher was economist David Romer of UC Berkeley, and Romer’s findings were shocking.

The first thing Romer did was analyze every fourth down during the first quarter of every NFL game between 1998 and 2000. (He had help from a computer program.) Then, he figured out the fluctuating value of a first down at each point on the football field. After all, a first down was more valuable for a team if it occurred on an opponent’s two-yard line than on their own twenty-yard line. The next thing Romer calculated was the statistical likelihood of going for it on fourth down under various circumstances and actually getting a first down. He also calculated the probability of kicking a successful field goal from various spots on the field.

So let’s say you are an NFL coach, and you have a fourth and three on your opponent’s 30-yard line. Romer could tell you that 1) you have a 60 percent chance of getting a first down, and that teams with first downs inside the thirty-yard line score a touchdown 40 percent of the time, for an expected point value of 1.7, and 2) that field goal attempts from the 32-yard line failed almost 65 percent of the time, which meant that going for a field goal only had an expected point value of 1.05. In other words, it’s almost twice as effective to go for it as to attempt a field goal.
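To make that arithmetic concrete, here is a minimal sketch of the two expected-point figures. The probabilities are the ones quoted above; the 7-point touchdown and 3-point field goal values are the usual assumptions, not figures taken from Romer’s paper.

```python
# Back-of-the-envelope version of the expected-point math quoted above.
# Assumption (not from Romer): a touchdown is worth 7 points, a field goal 3.

p_convert = 0.60    # chance of converting fourth-and-three at the opponent's 30
p_td_after = 0.40   # chance a team with a first down inside the 30 scores a TD
p_fg_good = 0.35    # field goals from the 32 fail almost 65 percent of the time

ev_go_for_it = p_convert * p_td_after * 7   # about 1.68 expected points (~1.7)
ev_field_goal = p_fg_good * 3               # about 1.05 expected points

print(f"Go for it:  {ev_go_for_it:.2f} expected points")
print(f"Field goal: {ev_field_goal:.2f} expected points")
```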

So what do most coaches do? They consistently make the wrong decision. According to Romer’s analysis, teams would have been better off going for it on fourth down during the 1st quarter on 1100 different drives. Instead, coaches decided to kick the ball 992 times. This meant that NFL coaches made the wrong decision over 90 percent of the time.

So how have coaches reacted to this data? In 2001, before Romer published his findings, the average team went for it on fourth down 15.1 times per season. During the 2010 season, the average NFL team went for it on fourth down…15.125 times. Perhaps 2011 will be the year coaches start to maximize profits. But I’m doubtful.

Clearly, publishing definitive data did not make one bit of difference in the way the coaches acted. Mr. Romer’s leading hypothesis for the utter lack of change is risk aversion.

To explain the consistently bad decisions of NFL coaches, Romer offered two different answers. The first is risk aversion. If coaches followed Romer’s strategy, they would fail about half the time they were within ten yards of the end zone. This means that instead of kicking an easy field goal and settling for three points, they would come away empty-handed. Although that’s a winning strategy in the long run, it’s hard to stomach. (As Daniel Kahneman notes, “Worst case scenarios overwhelm our probabilistic assessment, as the mere prospect of the worst case has so much more emotional oomph behind it.”) After a long drive down the field, fans expect some points. A coach who routinely disappointed the crowd would quickly get fired.

The bold is mine. People make decisions to get what they want, whether in coaching or in markets. In football, what coaches want is not to get fired! Winning the game by harnessing probability data is not necessarily at the top of their priority list. (Sports Illustrated carried an article about a high school football team that did take the statistics to heart. The team hasn’t punted since 2007 (!) and has managed to win 100 games over the last decade, including a state championship!)

Financial markets are not much different. Just as many participants are focused on career risk as on outperforming the markets. It’s easy to construct a worst case scenario about any market—heck, it seems like it comes to fruition half the time! Just the worry about a bad scenario can create all sorts of irrational behavior. If that’s true, what can be done to make Moneyball pay off in the markets?

The important thing is to let the data lead, as difficult as that is. The more systematic the approach, the more likely it is to succeed over time. If emotions are allowed to overrule carefully collected probability data, you’ve let the worst case scenario overwhelm the process. We’ve tried to design our Systematic RS accounts with that in mind. Not every decision will be correct, but sticking with the right process gives the best chance of success in the long run.


From the Archives: Why Predictions Are Often Wrong

October 14, 2011

We all know how difficult it is to get predictions right. And even when the forecaster is extremely knowledgeable about the topic—maybe even the world’s leading expert—the prediction is often wrong. Why does that happen?

In a great post about predictions, Phil Birnbaum notes, “The problem is that no matter how much you know about the price of oil, it’s random enough that the spread of outcomes is really, really wide: much wider than the effects of any knowledge you bring to the problem.” (The emphasis is mine.) In other words, the standard deviation around the mean is so huge that getting it right is simply a matter of luck.
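Birnbaum’s point is easy to see in a toy simulation. Assume a forecaster whose knowledge genuinely shifts the expected outcome, but by an amount that is small relative to the random spread around it; the specific numbers below are illustrative only, not from his post.

```python
import random

# Toy illustration (not from Birnbaum's post): when the random spread of
# outcomes dwarfs the forecaster's informational edge, knowing more barely
# improves forecast accuracy -- getting any single call "right" is mostly luck.

random.seed(0)
true_mean = 2.0    # the small, knowable component of the outcome
noise_sd = 20.0    # the much larger random component
trials = 100_000

expert_err = naive_err = 0.0
for _ in range(trials):
    outcome = random.gauss(true_mean, noise_sd)
    expert_err += abs(outcome - true_mean)  # expert forecasts the true mean
    naive_err += abs(outcome - 0.0)         # naive forecaster just says zero

print(f"Expert mean abs. error: {expert_err / trials:.1f}")
print(f"Naive  mean abs. error: {naive_err / trials:.1f}")
# The gap between the two is tiny relative to the ~20-point spread: knowledge
# helps a little, but the width of the distribution dominates any single call.
```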

Rather than rely on prediction (luck), we rely on our systematic process to guide our investment decisions. A systematic process is not always correct either, of course, but the decisions are made on the basis of data rather than relying on luck.

(Thanks to John Lewis for the article reference.)

(This article originally appeared 9/23/2009.) The spread of outcomes in any situation is really, really wide, and it seems especially so when politics are heavily involved in markets. The range of outcomes for the peripheral European debt problem, for example, is mind-boggling. You’re better off sticking to the data than going with an unreliable forecast.


Sector and Capitalization Performance

October 14, 2011

The chart below shows the performance of US sectors and capitalizations over the trailing 12 months, 6 months, and 1 month. Performance is updated through 10/13/2011.
