Underperformance

September 26, 2013

Whether you are an investment manager or a client, underperformance is a fact of life, no matter what strategy or methodology you subscribe to.  If you don’t believe me, take a look at this chart from an article at ThinkAdvisor.

Source: Morningstar, ThinkAdvisor

Now, this chart is a little biased because it is looking at long periods of underperformance—3-year rolling periods—from managers that had top 10-year track records.  In other words, these are exactly the kinds of managers you would hope to hire, and even they have long stretches of underperformance.  When things are going well, clients are euphoric, but they often feel like even short periods of underperformance mean something is horribly wrong.

The entire article, written by Envestnet’s J. Gibson Watson, is worth reading because it makes the point that simply knowing about the underperformance is not very helpful until you know why the underperformance is occurring.  Some underperformance may simply be a style temporarily out of favor, while other causes of underperformance might suggest an intervention is in order.

It’s quite possible to have a poor experience with a good manager if you bail out when you should hang in.  Investing well can be simple, but that doesn’t mean it will be easy!



Investment Manager Selection

April 12, 2013

Investment manager selection is one of several challenges that an investor faces.  However, if manager selection is done well, an investor has only to sit patiently and let the manager’s process work—not that sitting patiently is necessarily easy!  If manager selection is done poorly, performance is likely to be disappointing.

For some guidance on investment manager selection, let’s turn to a recent article in Advisor Perspectives by C. Thomas Howard of AthenaInvest.  AthenaInvest has developed a statistically validated method to forecast fund performance.  You can (and should) read the whole article for details, but good investment manager selection boils down to:

  • investment strategy
  • strategy consistency
  • strategy conviction

This particular article doesn’t dwell on investment strategy, but obviously the investment strategy has to be sound.  Relative strength would certainly qualify based on historical research, as would a variety of other return factors.  (We particularly like low-volatility and deep value, as they combine well with relative strength in a portfolio context.)

Strategy consistency is just what it says—the manager pursues their chosen strategy without deviation.  You don’t want your value manager piling into growth stocks because they are in a performance trough for value stocks (see Exhibit 1999-2000).  Whatever their chosen strategy or return factor is, you want the manager to devote all their resources and expertise to it.  As an example, every one of our portfolio strategies is based on relative strength.  At a different shop, they might be focused on low-volatility or small-cap growth or value, but the lesson is the same—managers that pursue their strategy with single-minded consistency do better.

Strategy conviction is somewhat related to active share.  In general, investment managers that are willing to run relatively concentrated portfolios do better.  If there are 250 names in your portfolio, you might be running a closet index fund.  (Our separate accounts, for example, typically have 20-25 positions.)  A widely dispersed portfolio doesn’t show a lot of conviction in your chosen strategy.  Of course, the more concentrated your portfolio, the more it will deviate from the market.  For managers, career risk is one of the costs of strategy conviction.  For investors, concentrated portfolios require patience and conviction too.  There will be a lot of deviation from the market, and it won’t always be positive.  Investors should take care to select an investment manager that uses a strategy the investor really believes in.

AthenaInvest actually rates mutual funds based on their strategy consistency and conviction, and the statistical results are striking:

The higher the DR [Diamond Rating], the more likely it will outperform in the future. The superior performance of higher rated funds is evident in Table 1. DR5 funds outperform DR1 funds by more than 5% annually, based on one-year subsequent returns, and they continue to deliver outperformance up to five years after the initial rating was assigned. In this fashion, DR1 and DR2 funds underperform the market, DR3 funds perform at the market, and DR4 and DR5 funds outperform. The average fund matches market performance over the entire time period, consistent with results reported by Bollen and Busse (2004), Brown and Goetzmann (1995) and Fama and French (2010), among others.

Thus, strategy consistency and conviction are predictive of future fund performance for up to five years after the rating is assigned.

The bold is mine, as I find this remarkable!

I’ve reproduced a table from the article below.  You can see that the magnitude of the outperformance is nothing to sniff at—400 to 500 basis points annually over a multi-year period.

Source: Advisor Perspectives/AthenaInvest

The indexing crowd is always indignant at this point, often shouting their mantra that “active managers don’t outperform!”  I regret to inform them that their mantra is false, because it is incomplete.  What they mean to say, if they are interested in accuracy, is that “in aggregate, active managers don’t outperform.”  That much is true.  But that doesn’t mean you can’t locate active managers with a high likelihood of outperformance, because, in fact, Tom Howard just demonstrated one way to do it.  The “active managers don’t outperform” meme is based on a flawed experimental design.  I tried to make this clear in another blog post with an analogy:

Although I am still 6’5″, I can no longer dunk a basketball like I could in college.  I imagine that if I ran a sample of 10,000 random Americans and measured how close they could get to the rim, very few of them could dunk a basketball either.  If I created a distribution of jumping ability, would I conclude that, because I had a large sample size, the 300 people who could dunk were just lucky?  Since I know that dunking a basketball consistently is possible—just as Fama and French know that consistent outperformance is possible—does that really make any sense?  If I want to increase my odds of finding a portfolio of people who could dunk, wouldn’t it make more sense to expose my portfolio to dunking-related factors—like, say, only recruiting people who were 18 to 25 years old and 6’8″ or taller?

In other words, if you look for the right characteristics, you have a shot at finding winning investment managers too.  This is valuable information.  Think of how investment manager selection is typically done:  “What was your return last year, last three years, last five years, etc.?”  (I know some readers are already squawking, but the research literature shows clearly that flows follow returns pretty closely.  Most “rigorous due diligence” processes are a sham—and, unfortunately, research shows that trailing returns alone are not predictive.)  Instead of focusing on trailing returns, investors would do better to locate robust strategies and then evaluate managers on their level of consistency and conviction.



Moving Averages and RS…By the Numbers

June 22, 2012

Investors frequently rely on market indicators, such as moving averages, to decide when to buy, sell, or hold a stock.  In fact, we hear all the time about the magical powers of the moving average indicator, which supposedly has the mystical capability of keeping you out of trouble during market downturns while making sure you are along for the ride on any rallies.

Therefore, we decided to test the performance of Ken French’s High Relative Strength Index (an explanation of this index can be found here) against 50- and 200-day moving averages.  We’ve calculated returns based on the assumption that the investor buys or holds when the price of the RS index is above the moving average, and sells when the price drops below the moving average.  So when the index is above its 50- or 200-day moving average, we are fully invested, and when it’s below, we are out of the index.
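As a rough sketch of that rule (a hypothetical helper, not our actual test code), assuming daily index levels in a pandas Series:

```python
import pandas as pd

def moving_average_backtest(prices: pd.Series, window: int = 50) -> pd.Series:
    """Growth of $1 for the rule above: hold the index while it trades
    above its moving average, otherwise sit in cash.

    `prices` is a series of daily index levels.  The signal is lagged one
    day so that today's return is earned on yesterday's decision (no
    look-ahead bias).
    """
    ma = prices.rolling(window).mean()
    # 1.0 = fully invested, 0.0 = out of the index
    invested = (prices > ma).astype(float).shift(1).fillna(0.0)
    daily_returns = prices.pct_change().fillna(0.0)
    return (1.0 + invested * daily_returns).cumprod()
```

The one-day lag on the signal is the detail that is easiest to get wrong in tests like these; without it, the backtest quietly trades on information it could not have had.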

Chart 1: Returns from 1963-2012.  During this time period, basing buy and sell decisions off the 50-day moving average is more successful than being fully invested.  It is important to keep in mind that this data includes the bear markets of the 1970s and 2000s.


Chart 2: Returns from 1975-2007.  When we start at a different point in time, the 50-day moving average performs much more poorly.  In this dataset, we’ve cut out two large bear markets, and the effect on returns is drastic.  In this case, it would have been better to just buy and hold.

Table 1: Annualized Returns by Time Period.  The average annualized returns also vary based on the period of time measured.  At certain times, following moving averages outperforms being fully invested; in other periods, the opposite is true.  Check out the difference between the two periods of ’83-’00 and ’66-’82.  Using a moving average can either make or break your returns.

Charts 3 and 4: Fully Invested Ken French – Use of 50 Day MA (5 and 10 Year Performance).  Investment performance based on moving averages varies greatly over time.  In some periods, it performs incredibly, while in others it does terribly.

The performance of moving average-based investing is directly related to the time period in which it is measured.  As shown in Table 1, the returns can be completely different even in periods that partially overlap.  The question then becomes not whether to use a moving average, but when!  If you can predict the future, you’ll easily be able to decide whether to use a moving average when holding an index.



Relative Strength, Decade by Decade

June 5, 2012

This post explores relative strength success by decade, dating back to the 1930s.  Once again, we’ve used data from the Ken French data library and the CRSP database.  You can click here for a more complete explanation of this data.

Chart 1: Percent Outperformance by Decade.  This chart shows the number of years in which relative strength has outperformed the CRSP universe each decade.  RS outperformance has occurred in at least half of all years each decade.

Chart 2: Average 1-Year Performance by Decade.  This chart shows the average yearly growth by decade of a relative strength portfolio and of the CRSP universe.  Each decade, the average performance of relative strength has been greater than the average performance of the CRSP universe.  Generally speaking, when the market’s average performance is increasing, RS outperforms CRSP by a greater percentage than it does when the market is doing poorly.

In short, relative strength has been a durable return factor for a very long time.



The Art of Doing Nothing

June 5, 2012

The Wall Street Journal had a fascinating article over the weekend on a training simulation for pension plan trustees.  Teams compete with one another, with advice and guidance from employees of Brandes Investment Partners, the developers of the simulation.  What participants should focus on—and what they do focus on—are often two different things.

What the participants should focus on, [Brandes research analyst Nick Magnuson] says, are the results over longer periods and the information they have about the people, philosophy and processes at the 13 hypothetical money-management firms. In most cases, long-term performance is “a byproduct” of those aspects, Mr. Magnuson explains, while short-term results can be “noisy” rather than predictive.

Yet, the trustees playing the simulation often find that it’s hard to resist a manager on a hot streak—and it’s tempting to dump a long-term winner in a slump. Typically, when Brandes conducts what it calls its Manager Challenge, at least one-third of the teams pick managers based on three-year records, says Barry Gillman, a consultant to Brandes who previously was head of the firm’s portfolio strategies group. “The ingrained patterns are too hard to break,” he says.

The key to success, as it is so often, is being thoughtful about your decisions and then sticking with them.

Participants in these investing simulations, as in the real world, tend to trade too much, the Brandes officials say. Last month, some teams made 10 trades a round. By contrast, the winning team made a total of just five trades after picking its initial portfolio—the fewest in the game.

Sometimes even less trading has paid off. At a few contests in the past, the Brandes folks saw teams select their initial portfolios, slip out of the room to spend their time elsewhere, and come back to find themselves the winners. “We don’t really want people to figure that out” and miss out on the full experience, Mr. Gillman says. “But the reality is many of them would really be better off doing that.”

Winning by doing nothing should be a big lesson to all investors.  Select your managers carefully based on people, philosophy, and process (we happen to like relative strength)—and then leave it alone.  Assuming the people haven’t turned over and the philosophy and process are unchanged, that should be simple to do.  All too often, however, it is not done.

Look at it this way: financial markets are going to bounce up and down no matter what managers you select.  Sometimes markets will be smooth for extended periods; at other times they will be frustrating and turbulent.  Again, this will occur regardless of the managers you select.  You cannot let your confidence in the process be derailed by the inevitable bumps in the market.

There is a fine art to doing nothing.  Resisting the urge to tinker once your due diligence is complete actually requires a conscious decision not to intervene at each temptation.  It’s harder than it looks—and that’s often the difference between winning and losing.



Relative Strength vs. Value – Performance Over Time

May 31, 2012

Thanks to the large amount of stock data available nowadays, we are able to compare the success of different strategies over very long time periods. The table below shows the performance of two investment strategies, relative strength (RS) and value, in relation to the performance of the market as a whole (CRSP) as well as to one another. It is organized in rolling return periods, showing the annualized average return for periods ranging from 1-10 years, using data all the way back to 1927.

The relative strength and value data came from the Ken French data library. The relative strength index is constructed monthly; it includes the top one-third of the universe in terms of relative strength.  (Ken French uses the standard academic definition of price momentum, which is the 12-month trailing return minus the front-month return.)  The value index is constructed annually at the end of June; here, the top one-third of stocks is chosen based on book value divided by market cap.  In both cases, the universes were composed of stocks with market capitalizations above the market median.
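A minimal sketch of the momentum selection rule, using a hypothetical helper (the market-cap screen and other details of the actual index construction are omitted):

```python
import pandas as pd

def momentum_top_third(monthly_prices: pd.DataFrame) -> list:
    """Select the top one-third of tickers by academic price momentum.

    `monthly_prices` holds month-end prices, one column per ticker.
    Momentum is the return from 12 months ago to 1 month ago -- i.e. the
    12-month trailing return with the most recent ("front") month
    stripped out, matching the definition quoted above.
    """
    if len(monthly_prices) < 13:
        raise ValueError("need at least 13 month-end observations")
    momentum = monthly_prices.iloc[-2] / monthly_prices.iloc[-13] - 1
    cutoff = momentum.quantile(2 / 3)  # keep the top third of the universe
    return sorted(momentum[momentum >= cutoff].index)
```

Skipping the front month is the standard academic convention, since very recent returns tend to reverse over the next month rather than continue.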

Lastly, the CRSP data includes the total universe of stocks as well as the risk-free rate, which is essentially the 3-month Treasury bill yield. The CRSP data serves as a benchmark representing the generic market return. It is also worthwhile to know that the S&P 500 and DJIA typically do worse than the CRSP total-market data, which makes CRSP a harder benchmark to beat.

 

Source: Dorsey Wright Money Management

The data supports our belief that relative strength is an extremely effective strategy. In rolling 10-year periods since 1927, relative strength outperforms the CRSP universe 100% of the time.  Even in 1-year periods, it outperforms 78.6% of the time. As can be seen here, relative strength typically does better over longer periods. While it is obviously possible to do poorly in an individual year, an investor who continues to implement a winning strategy time and time again will find that the more frequent and/or larger successful years outweigh the bad ones.

Even more importantly, relative strength typically outperforms value investing. Relative strength defeats value in over 57% of periods of all lengths, doing best in 10-year periods, where 69.3% of trials outperform. While relative strength and value strategies have historically both generally beaten the market, relative strength has been more consistent in doing so.



From the Archives: The Math Behind Manager Selection

May 31, 2012

Hiring and firing money managers is a tricky business.  Institutions do it poorly (see background post here), and retail investors do it horribly (see article on DALBAR).  Why is it so difficult?

This white paper on manager selection from Intech/Janus goes into the mathematics of manager selection.  Very quickly it becomes clear why it is so hard to do well.

Many investors believe that a ten-year performance record for a group of managers is sufficiently long to make it easy to spot the good managers. In fact, it is unlikely that the good managers will stand out.  Posit a good manager whose true average relative return is 200 basis points (bps) annually and true tracking error (standard deviation of relative return) is 800 bps annually. This manager’s information ratio is 0.25. To put this in perspective, an information ratio of 0.25 typically puts a manager near or into the top quartile of managers in popular manager universes.

Posit twenty bad managers with true average relative returns of 0 bps annually, true tracking error of 1000 bps annually, hence an information ratio of 0.00.

There is a dramatic difference between the good manager and the bad managers.

The probability that the good manager beats all twenty bad managers over a ten-year period is only about 9.6%.  This implies that chasing performance leaves the investor with the good manager only about 9.6% of the time and with a bad manager about 90.4% of the time.

In other words, 90% of the time the manager with the top 10-year track record in the group will be a bad manager!  Maybe a longer track record would help?

A practical approach is to ask how long a historical performance record is necessary to be 75% sure that the good manager will beat all the bad managers, i.e., have the highest historical relative return. Assuming the same good manager as before and twenty of the same bad managers as before, a 157 year historical performance record is required to achieve a 75% probability that the good manager will beat all the bad managers.

It turns out that it would help, but since none of the manager databases have 150-year track records, in practice it is useless.  The required disclaimer that past performance is no guarantee of future results turns out to be true.
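The paper’s 9.6% figure is easy to sanity-check with a quick Monte Carlo simulation under the stated assumptions (this is my own sketch, not code from the paper):

```python
import numpy as np

def prob_good_beats_all(years=10, n_bad=20, trials=50_000, seed=0):
    """Estimate how often the good manager posts the best N-year record.

    Assumptions from the Intech/Janus paper: the good manager has a true
    mean relative return of 200 bps with 800 bps tracking error; each bad
    manager has a 0 bps mean with 1000 bps tracking error.
    """
    rng = np.random.default_rng(seed)
    # Average annual relative return over `years`, for each simulated trial
    good = rng.normal(0.02, 0.08, size=(trials, years)).mean(axis=1)
    bad = rng.normal(0.00, 0.10, size=(trials, n_bad, years)).mean(axis=2)
    # Fraction of trials in which the good manager tops every bad manager
    return (good > bad.max(axis=1)).mean()
```

Run with these defaults, the estimate lands in the neighborhood of the paper’s 9.6%: ten years of data simply isn’t enough to separate a genuinely good manager from twenty lucky mediocre ones.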

There is still an important practical problem to be solved here.  Assuming that bad managers outnumber good ones and assuming that we don’t have 150 years to wait around for better odds, how can we increase our probability of identifying one of the good money managers?

The researchers show mathematically how combining an examination of the investment process with historical returns makes the decision much simpler.  If the investor can make a reasonable assumption about a manager’s investment process leading to outperformance, the math is straightforward and can be done using Bayes’ Theorem to combine probabilities.

…the answer changes based on the investor’s assessment of the a priori credibility of the manager’s investment process.

It turns out that the big swing factor in the answer is the credibility of the underlying investment process.  What are the odds that an investment process using Fibonacci retracements and phases of the moon will generate outperformance over time?  What are the odds that relative strength or deep value will generate outperformance over time?
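As a toy illustration of how Bayes’ Theorem combines a prior on the process with an observed track record (the numbers below are hypothetical, not from the paper):

```python
def posterior_good(prior_good: float, p_record_if_good: float,
                   p_record_if_bad: float) -> float:
    """Bayes' Theorem: P(manager is good | attractive track record).

    prior_good       -- a priori credibility of the investment process
    p_record_if_good -- chance a good manager shows this record
    p_record_if_bad  -- chance a bad manager shows it anyway, by luck
    """
    evidence = (prior_good * p_record_if_good
                + (1.0 - prior_good) * p_record_if_bad)
    return prior_good * p_record_if_good / evidence

# A credible process (50% prior) vs. a dubious one (5% prior), given the
# same attractive record: the posterior swings from about 86% to about 24%.
credible = posterior_good(0.50, 0.60, 0.10)
dubious = posterior_good(0.05, 0.60, 0.10)
```

The same track record, run through the same arithmetic, yields a wildly different answer depending on the prior—which is exactly the paper’s point about the credibility of the underlying process being the big swing factor.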

The research paper concludes with the following words of wisdom:

A careful examination of almost any investor’s investment manager hiring and firing process is likely to reveal that there is a substantial component of performance chasing. Sometimes it is obvious, e.g., when there is a policy of firing a manager if he has negative performance after three years. Other times it is subtle, e.g., when the initial phase of the manager search process strongly weights attractive historical performance. No matter the form that performance chasing takes, it tends to produce future relative returns that are disappointing compared to expectations.

Historical performance alone is not an effective basis for identifying a good manager among a group of bad managers. This does not mean that historical performance is useless. Rather, it means that it must be combined efficiently with other information. The correct use of historical performance relegates it to a secondary role. The primary focus in manager choice should be an analysis of the investment process.  [emphasis added]

This research paper is eye-opening in several respects.

1) It shows pretty clearly that historical performance alone–despite what our intuition tells us–is not sufficient to select managers.  This probably accounts for a great deal of the poor manager selection, the subsequent disappointment, and rapid manager turnover that goes on.

2) It is very clear from the math that only credible investment processes are likely to generate long-term outperformance.  Fortunately, lots of substantive academic and practitioner research has been done on factor analysis leading to outperformance.  The only two broadly robust factors discovered so far have been relative strength and value, both in various formulations–and, obviously, they have to be implemented in a disciplined and systematic fashion.  If your investment process is based on something else, there’s a decent chance you’re going to be disappointed.

3) Significant time is required for the best managers to stand out from the much larger pack of mediocre managers.

This is a demanding process for consultants and clients.  They have to willfully reduce their focus on even 10-year track records, limit their selection to rigorous managers using proven factors for outperformance, and then exercise a great deal of patience to allow enough time for the cream to rise to the top.  The rewards for doing so, however, might be quite large—especially since almost all of your competition will ignore the correct process and simply chase performance.

This article originally appeared 1/28/2010.  I have seen no evidence since then that most consultants have improved their manager selection process, which is a shame.
