About That Small Cap Effect: Oops!

That’s the earthshaking conclusion of Michael Edesess, writing in Advisor Perspectives.  Here’s his lead:

The supposed outperformance of small cap stocks is a foundational precept on which many respected asset managers have staked their expertise over the years – foremost among them, Dimensional Fund Advisors (DFA), the famed fund company that has gained a near-religious following since they popularized small cap indexing three decades ago. A growing body of research, however, shows no such advantage for the last 30 years and, now, a new study seems to have proven that the supposed small-cap advantage may have never existed in the first place.

The paper, which appeared in September in Financial Advisor Magazine, was written by Gary A. Miller and Scott A. MacKillop of Wyoming-based Frontier Asset Management, and it began with this startling claim:

“The results show that, in the 1936-1975 period, the common stock of small firms had, on average, higher risk-adjusted returns than the common stock of larger firms.” That one sentence, which appeared in a paper by Rolf Banz published in the Journal of Financial Economics in 1981, is the foundation for the investment truism, “Small stocks beat large stocks.” As it turns out, Banz was wrong.

And MacKillop and Miller are right, as my own analysis confirms. As we shall see, not only was Banz wrong but so, later, were Eugene Fama and Kenneth French, as well as a number of others who have asserted expertise on this subject.

The writer discusses the search for the elusive small cap advantage in the Miller and MacKillop paper:

The Banz paper claimed that small caps not only grew faster than the S&P 500 over the 1936-1975 period but that they beat it on a risk-adjusted basis too. This became known as the “small cap effect.” It was reaffirmed in the 1992 Fama-French paper.

It is now widely known, however, that after the Banz paper was published in 1981, and after DFA introduced its small cap fund, the small cap effect went the way of what I call the Schwert rule: “After they are documented and analyzed in the academic literature, anomalies often seem to disappear, reverse, or attenuate.”

Miller and MacKillop, who believe in the Schwert rule, set out to perform another check on the small cap effect in the post-1981 period to corroborate the results of other studies.  Their research did corroborate those studies – including recent investigations by Fama and French themselves – by finding no small cap alpha in that period.

Out of curiosity, and having the data at hand, they then checked out the small cap effect over the whole period 1926-2010 – and found none.

This piqued their curiosity further, so they tried reinvestigating only the period, 1936-1975, of the Banz paper, as well as the whole pre-Banz period for which data were available, 1926-1981.

Lo and behold: contrary to the findings of Banz, and of Fama and French, they found no small cap effect – there was simply no superior risk-adjusted small cap performance in those periods whatsoever.

Finally, Mr. Edesess drills down to the math error that created the small cap effect:

I taught statistics to business students for three years while I was in graduate school. One of the things you try to teach students is to remember the assumptions behind the statistical methodologies. For regression analysis, a key assumption is that the underlying distributions of the random variables are normal.

Unfortunately, most people just ignore the assumptions altogether. This is too often the case in the financial field. Specifically, monthly returns are obviously not going to fit a normal distribution, because they are not symmetrical – they are limited to -100% on the downside, while they are unlimited on the upside. The log-return distribution may be symmetrical, but not the distribution of the returns themselves. Miller and MacKillop do the appropriate thing by transforming all returns to log-returns so as to better fit the assumptions of the regression method. The others, as far as I can discern, do not.
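The transformation Miller and MacKillop use is the standard one: each ordinary return R becomes log(1 + R). A minimal sketch of the asymmetry being fixed (the return values below are illustrative, not from their data):

```python
import math

# Ordinary monthly returns: bounded below at -100%, unbounded above.
monthly_returns = [0.05, -0.05, 0.50, -0.50]
log_returns = [math.log(1 + r) for r in monthly_returns]

for r, lr in zip(monthly_returns, log_returns):
    print(f"ordinary {r:+.2f} -> log {lr:+.4f}")

# A +50% and a -50% ordinary return are equal in size, but in log space the
# loss is larger in magnitude: log(1.5) is about +0.405, log(0.5) about -0.693.
```

A return of -100% maps to negative infinity in log space, which is exactly the unbounded downside a normal distribution requires.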

Can this change in the methodology alone account for Miller and MacKillop’s different results?

Yes it can.

I performed a test to answer this question. Anyone who wishes to may replicate the analysis. Here are the details. I started by generating 1,000 sequences of 1,020 random monthly small cap and large stock returns (1,020 = 12 months x 85 years). In this process, I assumed that the expected annual risk premium (premium log-return) of large stocks was 6% with a standard deviation of 20%, and that the premium of small caps had a standard deviation of 35% and that its beta with large stocks was 1.5. This implies that small caps’ expected premium was 9% (1.5 times 6%).

When I ran regressions on the resulting log-returns, the alphas averaged zero – as expected, because the data were designed that way.

But when I regressed the ordinary monthly returns, the average alpha came out about 0.25%, which annualizes to about 3% a year. This implies that Banz’s findings – and those of Fama and French as well – are spurious, the result of failing to transform the data so that they fit the assumptions of regression analysis.
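Mr. Edesess does not publish his code, but the test is easy to reconstruct from his description. The sketch below is my own reconstruction, not his; the 6%, 20%, 35%, and 1.5 parameters come straight from the passage above. It builds monthly log premia with zero small cap alpha by construction, then runs the regression both ways:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SIMS, N_MONTHS = 1000, 1020             # 1,000 sequences of 12 months x 85 years
MU_L, SD_L = 0.06 / 12, 0.20 / 12**0.5    # large-stock monthly log premium: 6%/yr mean, 20%/yr sd
BETA, SD_S = 1.5, 0.35 / 12**0.5          # small cap beta 1.5, 35%/yr total sd

# Residual sd chosen so total small cap variance = beta^2 * var_large + var_resid
sd_eps = np.sqrt(SD_S**2 - BETA**2 * SD_L**2)

large = rng.normal(MU_L, SD_L, (N_SIMS, N_MONTHS))         # log premia
small = BETA * large + rng.normal(0, sd_eps, large.shape)  # zero alpha by construction

def mean_alpha(x, y):
    """Average OLS intercept of y on x across the simulated sequences."""
    xm = x - x.mean(axis=1, keepdims=True)
    ym = y - y.mean(axis=1, keepdims=True)
    beta = (xm * ym).sum(axis=1) / (xm**2).sum(axis=1)
    return (y.mean(axis=1) - beta * x.mean(axis=1)).mean()

alpha_log = mean_alpha(large, small)                      # regress the log returns
alpha_ord = mean_alpha(np.expm1(large), np.expm1(small))  # regress the ordinary returns

print(f"avg alpha, log returns:      {alpha_log:.5f}")  # near zero, as designed
print(f"avg alpha, ordinary returns: {alpha_ord:.5f}")  # spurious positive alpha
```

The ordinary-return alpha is spurious convexity: exp() of the noisier small cap series gets a bigger Jensen's-inequality boost than beta times the large-stock series, and the regression intercept absorbs the difference.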


That should put the nail in the coffin of the small cap effect. It’s quite possible that it calls into question the value effect as well.

Wow! In other words, the small cap effect was a math error. (I added the emphasis above.) I am not a math whiz, but I understand what Mr. Edesess is saying here—the earlier studies did not use log returns, and the small cap premium is a statistical artifact of that omission.

That’s how science progresses. Someone throws out a hypothesis and everyone tries to disprove it. In the future we may even see papers arguing with Mr. Edesess’s math. However, I suspect he may be correct because I’ve seen this happen before—someone proves a thesis mathematically, but is not a math expert. A more accomplished mathematician points out the flaw in the math and the proof goes out the window.

The other reason that I suspect Mr. Edesess is correct is that it never made sense to me that small caps had superior risk-adjusted returns. From a common-sense standpoint, it is almost always low-volatility assets that show superior risk-adjusted returns, i.e. the absolute return is lower, but the risk-adjusted return is higher. Small caps tend to be incredibly volatile, so the excess return would have to be massive to be superior on a risk-adjusted basis. There are cycles where small caps perform very well, but there are also periods where they perform miserably.
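Back-of-the-envelope arithmetic makes the same point. Borrowing the figures Mr. Edesess assumed in his simulation (a 6% premium at 20% volatility for large stocks, 35% volatility for small caps), small caps would need roughly a 10.5% premium just to match the large cap Sharpe ratio, before showing any superiority at all:

```python
large_premium, large_vol = 0.06, 0.20  # large-stock figures from Edesess's simulation
small_vol = 0.35                       # his assumed small cap volatility

large_sharpe = large_premium / large_vol        # 0.06 / 0.20 = 0.30
breakeven_premium = large_sharpe * small_vol    # premium needed to merely tie
print(f"{breakeven_premium:.1%}")               # 10.5%
```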

It also points out to me the divide in finance between academics and practitioners. An academic grabs a big dataset, runs a giant regression and examines the statistical properties, whereas a practitioner is more likely to ignore the higher math and just run an equity curve of the p&l. Since the practitioner is ultimately concerned with making money, running an equity curve makes sense. That strikes academics as simple-minded but it does have the benefit of being difficult to screw up. I think this is one of those times I am glad all of our testing is simple and robust.
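For the curious, the practitioner's check really is that simple: compound the return series into a growth-of-a-dollar curve and eyeball it. A minimal sketch (the return values are made up for illustration):

```python
import numpy as np

def equity_curve(monthly_returns, start=1.0):
    """Compound a monthly return series into a growth-of-$1 equity curve."""
    return start * np.cumprod(1.0 + np.asarray(monthly_returns, dtype=float))

curve = equity_curve([0.02, -0.01, 0.03])
print(curve)  # each point is the running value of $1 invested at the start
```

There is no distributional assumption to violate here, which is exactly why it is hard to screw up.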

