A Global Passive Benchmark with ETFs and Factor Tilts


“One way to test our thinking would be to ask the question in reverse: If your index manager reliably delivered the full market return with no more than market risk for a fee of just 5 bps, would you be willing to switch to active performance managers who charge exponentially more and produce unpredictably varying results, falling short of their chosen benchmarks nearly twice as often as they outperform—and when they fall short, losing 50% more than they gain when they outperform? The question answers itself.” – Charles Ellis, “The Rise and Fall of Performance Investing”

Passive Aggressive

In a recent article published in Financial Analysts Journal, Charles Ellis makes an excellent case for the death of active management. Ellis asserts that the efficiency of a market is a function of the number and quality of active, informed investors at work in the market at any time. As more investors with increasingly deep educational backgrounds armed with mountains of data and obscene amounts of computational horsepower enter the market seeking inefficiencies, they will eventually eliminate all of the inefficiencies they so diligently pursue.

Plenty of literature supports this view. Ellis himself cites a seminal study by Fama which concluded that,

“Active management in aggregate is a zero-sum game—before costs. . . . After costs, only the top 3% of managers produce a return that indicates they have sufficient skill to just cover their costs, which means that going forward, and despite extraordinary past returns, even the top performers are expected to be only about as good as a low-cost passive index fund. The other 97% can be expected to do worse.”

Two recent studies (here and here) by Blake et al., sponsored by the Pensions Institute at Cass Business School in London, further bolster the results from Fama. They applied a more rigorous methodology called bootstrapping, which allowed the authors to compare actual mutual fund returns to a distribution of returns which might have been expected purely as a result of random chance. Their results are in Figure 1.

Figure 1.

Blake_Bootstrap_Chart

To interpret this chart note the green and blue curves. The blue curve charts the results of the robust bootstrap test, and describes the distribution of returns that would be expected purely due to random chance. The green curve describes the observed distribution of results for mutual funds which were active during the entire period 1998 – 2008. The blue dotted vertical lines bookend the 5th and 95th percentile performance (measured as the t-score of alpha) which might have been expected from random chance. Note that the green line is to the left of the blue line over the entire distribution, leading Blake to remark:

“…there is no evidence that even the best performing mutual fund managers can beat the benchmark when allowance is made for the costs of fund management.”

In fact, the authors conclude that on average, investors would accrue an extra 1.44% per year in alpha from investing in passive benchmarks. We would encourage more technical readers to refer to section 2.2 in Blake for a more detailed explanation of the bootstrap methodology.

Interestingly, the authors also studied the impact of mutual fund size on performance, and found that smaller funds outperform larger funds. This effect is economically significant: Blake et al. found that a doubling in fund assets results in an average 0.9% per year reduction in fund alpha.

Let’s think about these two facts for a second. First, there is no evidence that any mutual fund managers outperform after accounting for fees and luck effects. Second, larger funds lag smaller funds. How might this help to explain the chronic and egregious underperformance of private investors described by perennial Dalbar studies, per Figure 2 (average private investor returns in red)?

Figure 2.

Individual_vs_Asset_Classes_Bernstein

Certainly there are many factors that have contributed to this dismal reality, such as performance chasing behaviour, poor advice, and emotionally driven decision making. That said, retail investors very often gravitate toward, or are directed into, behemoth funds operated by large, well-known investment firms. Perhaps investors (and Advisors) feel that a large institution with a long history is more likely to have investors’ best interests at heart. Almost certainly there is a feeling of ‘safety in numbers’; as Keynes famously said, “Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally”. The sad reality, however, is that most investors, chasing the wrong kinds of funds on the basis of precisely wrong evaluation methods, will continue to fall far short of their goals.

But I digress. The point is, Ellis claims active management is a mug’s game, and the research strongly supports this view. This is complicated further by the fact that, while some managers will inevitably outperform in any given period purely as a result of good luck, it is virtually impossible to identify these managers in advance. Worse, traditional methods of selecting managers based on 3 to 5 year track records are a near certain recipe for disaster. Figure 3 describes the proportion of institutions which evaluate and terminate managers at various horizons. Observe that while most institutions evaluate managers on a quarterly basis, they base termination decisions on 3 to 5 year evaluation periods. Yet, as Figure 4 makes clear, managers that are fired, presumably because of poor 3 to 5 year performance, go on to outperform replacement managers over the next 1, 2, and 3 year periods.

Figure 3. Proportion of institutions that evaluate and terminate managers at various horizons.

Source: Employee Benefit Research Institute

Figure 4. Excess returns to terminated and newly hired managers in the 3 years prior to, and subsequent to, termination

Source: Goyal and Wahal, 2008

Whatever method these institutions – and their consultant advisors – are using to evaluate, terminate and hire managers, it doesn’t appear to work very well on a 3 to 5 year evaluation period. Here we have a situation where the vast majority of active managers underperform, exacerbated by the fact that the managers who are expected to outperform typically go on to underperform the managers who are expected to underperform. Quite a conundrum.

As I was writing this section, a new paper from Vanguard hit my inbox which further bolsters the point that chasing top performing managers is a surefire way to underperform. Figure 5 from the paper compares the results of two simulations for each traditional mutual fund ‘style box’. First, for each year from 2004 to 2013, the authors randomly selected a portfolio of funds from the universe of funds in each style box, repeating this procedure many times to generate a distribution of performance across all possible portfolios during the period. Next they simulated a ‘performance chasing’ portfolio by randomly selecting from only the top performing funds over the previous three-year period. They chose this evaluation horizon because it matches the typical mutual fund holding period.

Figure 5. Distribution of returns for all funds vs. performance chasing strategy by style box, 2004 – 2013

Vanguard_Perf_Chasing_Distributions

It’s easy to see that, in every style box, portfolios formed from the top mutual funds by three-year returns underperformed the average mutual fund by a wide margin: about 12 Sharpe points (0.12) on average. That’s a lot of Sharpe points when the average Sharpe ratio is about 0.4. In the context of these seemingly insurmountable hurdles for active management, Ellis advises that, “…investors would benefit by switching from active performance investing to low-cost indexing.” It’s tough to argue with this conclusion. Unfortunately, however, it raises as many questions as it answers.

So Now What?

While Ellis’ prescription to eschew active management for low-cost indexing appears to solve some important problems, his article falls remarkably short on how to implement such an approach. He seems to favour low-cost Exchange Traded Funds as the most appropriate instruments to gain exposure to passive returns. However, the reader is left to determine how best to assemble such instruments to meet client goals.

I sought answers in the 6th edition of Ellis’ book, Winning the Loser’s Game, which has an introduction from none other than Yale CIO legend David Swensen, and echoes many of the themes David has trumpeted over the years. This is unsurprising because Charles served on the Yale endowment board for many years alongside David.

After a thorough read, I was still flummoxed. Ellis cites a great deal of data on the long-run performance of passive strategies, and even more data on the failure of active management, but he offers no meaningful prescriptions for implementation. Instead, he implores investors to get educated about estate planning and the fundamentals of asset allocation, and to take charge of their own affairs. This is undoubtedly excellent advice.

Advisors can play a role in what Ellis calls, “values discovery”, which is, “the process of determining each client’s realistic objectives with respect to various factors—including wealth, income, time horizon, age, obligations and responsibilities, investment knowledge, and personal financial history—and designing the appropriate strategy.”

Again, we support this conclusion, though Advisors do not always take this part of their role as seriously as they should. Certainly, each client should be thoroughly advised in the context of their objectives and constraints. But it is not obvious how to link Ellis’ vision of a purely passive approach to the idea of custom advice, and commensurately a custom asset allocation. Our inclination would be to invoke the Capital Market Line, which acknowledges the existence of one optimal portfolio, where risk is scaled up and down by introducing cash or leverage.
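
To make this concrete, here is a minimal sketch of Capital Market Line scaling. It is our illustration rather than anything Ellis prescribes, and the 10% market volatility is a hypothetical placeholder:

```python
# Capital Market Line scaling: hold one optimal risky portfolio and meet
# each client's risk target by blending it with cash, or levering it up.
# The 10% market volatility below is a hypothetical placeholder.

def cml_weight(target_vol: float, market_vol: float) -> float:
    """Fraction allocated to the risky market portfolio (remainder in cash).
    A weight above 1.0 implies leverage."""
    return target_vol / market_vol

market_vol = 0.10
for target_vol in (0.05, 0.10, 0.15):
    w = cml_weight(target_vol, market_vol)
    print(f"target vol {target_vol:.0%}: {w:.0%} market portfolio, {1 - w:+.0%} cash")
```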

The vast majority of investable assets for both private individuals and institutions is ‘long-term money’, with a time horizon in excess of five years. This kind of capital will generally benefit from full exposure to a diversified portfolio of risky assets in order to maximize the opportunity for excess returns above what might be earned from cash. The question is, what might this portfolio look like?

The Only True Passive Benchmark: The Global Market Portfolio

In 1964, Bill Sharpe demonstrated that, at equilibrium, the portfolio which promises the greatest excess return per unit of risk is the Global Market Portfolio, which is composed of all risky assets in proportion to their market capitalization. Many investors will be familiar with this concept from their experience with market cap weighted indexes like the S&P 500. These are the ultimate passive investments within an asset class. However, it is not as obvious how to apply this concept across asset classes.

Importantly, since the Global Market Portfolio represents the aggregate holdings of all investors, it is the only true passive strategy. It is also the truest expression of faith in efficient markets. All other portfolios, including the ubiquitous 60/40 ‘balanced’ portfolio of (mostly domestic) stocks and bonds, represent very substantial active bets relative to this global passive benchmark.

Doeswijk et al. recently published a paper on the evolution of the global multi-asset portfolio, in which they examined the relative dollar proportions of all financial assets around the world from 1959 through 2012. There were roughly $90.6 trillion in tradeable financial assets globally as of the end of 2012, divided up as described in Figure 6.

Figure 6. The Global Market Portfolio, 2012

Global_Market_Portolio

Source: Doeswijk, Ronald Q., Lam, Trevin W., and Swinkels, Laurens, “The Global Multi-Asset Market Portfolio 1959–2012” (January 2014), Financial Analysts Journal

You will note that bonds represent about 55% of total financial assets while equity-like assets represent 45%. It’s well documented that Private Equity is just equity and real estate with a lag factor; furthermore, unless you are an Ivy League school endowment, or a member of the global elite, you don’t have access to quality private equity, so you might as well assume it doesn’t exist. We also wondered whether the authors include infrastructure investments under equity, and whether there is a place for commodities, though they aren’t strictly a financial asset. But in our opinion, this framework is 99% complete.

An ETF Proxy Global Liquid Market Portfolio

The proliferation of ultra low-cost index tracking mutual funds and Exchange Traded Funds (ETFs) makes it easier than ever for private and institutional investors alike to express a global passive bet via the Global Market Portfolio. Figure 7 illustrates our best effort at recreating the proportional exposures described in Figure 6 with liquid ETFs.

Figure 7.

Global_Market_Portfolio_ETFsa

It should be simple to link the allocations in Figure 7 with the allocations in Figure 6. The one exception relates to Private Equity, which we have subsumed into roughly equal allocations to equities and real estate. Note that the total annual Management Expense Ratio (MER) for this portfolio on a weighted average basis is under 30 basis points, or 0.3%, and ETF MERs are dropping all the time.
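
As a sanity check on that figure, the weighted-average MER is just the dot product of weights and fees. The sketch below uses hypothetical placeholder holdings, not the actual Figure 7 allocations:

```python
# Weighted-average MER of an ETF portfolio. The names, weights and fees
# below are hypothetical placeholders; substitute the Figure 7 allocations.

holdings = {
    # name: (portfolio weight, annual MER)
    "global equity ETF": (0.40, 0.0015),
    "global bond ETF":   (0.50, 0.0020),
    "real estate ETF":   (0.10, 0.0040),
}

assert abs(sum(w for w, _ in holdings.values()) - 1.0) < 1e-9  # weights sum to 1
weighted_mer = sum(w * fee for w, fee in holdings.values())
print(f"weighted-average MER: {weighted_mer:.2%}")  # 0.20% with these inputs
```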

It’s interesting to note that this portfolio requires no rebalancing, because market-cap weights drift automatically with the relative performance of each asset class. However, a passive investment in these ETFs will not account for the relative issuance and retirement of securities. This has a large impact on weights over longer periods, so investors will need to consult the literature periodically to ensure weights are still aligned. That said, this portfolio has the lowest theoretical turnover of any portfolio.

While the Global Market Portfolio is the only true passive benchmark, there are some simple ways to improve on the concept without introducing traditional forms of active management.

An ETF Proxy Global Market Portfolio with Factor Tilts

Even the most ardent believers in efficient markets acknowledge the existence of persistent risk factors which give rise to returns in excess of what is achievable from a purely market capitalization based benchmark. While enthusiastic finance PhDs and practitioners have identified hundreds of possible equity anomalies, only three stand up to rigorous statistical scrutiny (see here and here): value, momentum, and low beta (or low volatility) [Note: the illiquidity premium is also significant, but for obvious reasons is not very investable.] The so-called SMB or ‘size’ premium was discredited many years ago for US stocks (see here), and no evidence exists for this anomaly outside US stocks (see here). That said, small-cap value shows enduring promise.

Table 1. Historical Equity Factor Premia

Robeco_Equity_Factor_Premia

Table 1 from Robeco shows the historical returns to these equity market factor premiums. A statistically significant anomaly might be expected to deliver 2 or 3% alpha per year; given that 30% of the portfolio is exposed to factor tilts, investors might expect 0.6 – 0.9% per year in excess returns. The MER of the portfolio is 0.35%, so this would essentially cover fees, plus a little extra. Furthermore, the diversification properties among the assets and factors might be expected to lower volatility by 0.25% to 0.5%, so the boost to risk adjusted performance from this portfolio could be meaningful, at least in the context of a passive framework.
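
The arithmetic behind those estimates is simple enough to lay out explicitly. The inputs below are the figures quoted above, not a forecast:

```python
# Portfolio-level boost from factor tilts: tilt weight times factor alpha,
# less the portfolio's MER. Inputs are the figures quoted in the text.

tilt_weight = 0.30       # share of the portfolio exposed to factor tilts
portfolio_mer = 0.0035   # 0.35% weighted-average fee

for factor_alpha in (0.02, 0.03):       # 2-3% per year per anomaly
    gross = tilt_weight * factor_alpha  # 0.6-0.9% at the portfolio level
    net = gross - portfolio_mer         # what survives after fees
    print(f"alpha {factor_alpha:.0%}: gross {gross:.2%}, net of fees {net:.2%}")
```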

To our knowledge, bond anomalies are fewer in number, and only two types of risk offer persistent excess returns: duration and credit. Duration risk is simply the risk of lending money at a fixed rate for a longer period, and the empirical evidence is weak for any material premium above maturities of about 10 years. Rather, the best we can say is that longer duration bonds outperform during declining inflation regimes while shorter duration bonds outperform during rising inflation regimes. Hardly a consistent anomaly. Credit risk is the return that investors demand in order to be compensated for the risk of bond default. After accounting for default risk and recoveries, the only credit spread with a significant positive risk premium is the BBB-AAA spread, also called the ‘Crossover premium’.

It is a relatively simple task to assemble the equity factor exposures to approximate the market-cap and geographical distribution of the global market portfolio. Figure 8 is an attempt to do just that.

Figure 8. ETF Proxy Global Market Portfolio with Factor Tilts

Global Market Factor Tilt Portfolioa

At the margin, it would be advantageous to hold a diversified exposure to commodities. However, there is little evidence that commodities exhibit a positive risk premium over the long-term. Rather than passive commodity exposure, sophisticated investors might contemplate a 5% strategic investment in CTAs. These funds have positive expectancy, largely because they harness the momentum factor across assets, but their real strength is structural diversification. This class of investment is really the only alternative asset class (except short equity) with persistent negligible correlation to equities. They also tend to deliver their strongest performance during equity bear markets, making them a compelling tail hedge.

The Global Market Model would almost certainly be further improved by the introduction of systematic factor exposures across asset classes as well as within them, as part of a Global Tactical Asset Allocation overlay. For example, there are well documented value and momentum factors which might be systematically applied as a portable alpha strategy to improve absolute and risk-adjusted returns, as described in Table 2 from Asness, Moskowitz and Pedersen (2013) (see also here). The statistical significance of these systematic tactical alpha premiums is actually higher than what is observed among analogous equity factors, so if you acknowledge one there is no logical reason why you wouldn’t adopt both.

Table 2. Global Tactical Asset Allocation Momentum and Value Return Premia

Asness_Asset_Class_Tactical_Alphas

Source: Asness, Moskowitz, and Pedersen: Value and Momentum Everywhere (2013)

Table 2 illustrates that simple systematic exposures to momentum and value factors across asset classes have delivered 2.6% and 2.9% annualized alphas (t-scores > 3), respectively, over the past 40 years. Furthermore, these factors are excellent mutual diversifiers at the portfolio level, offering the opportunity to further lower aggregate risk. There is little doubt that institutions and private investors alike would benefit from these kinds of tactical alpha overlays, especially in today’s low-yield environment.

In summary, investors are starting to acknowledge the overwhelming evidence that active security selection is a loser’s game. This realization has caused a massive exodus from traditional mutual funds and Separately Managed Accounts and into passive Exchange Traded Funds. Investors who choose to follow this trend face a new set of challenges related to the expression of a passive view in their asset allocation. The Global Market Portfolio represents the most coherent expression of this view, and any deviation from this portfolio represents an active bet. Thus most investors who think they are passive are actually active; worse, they are making large concentrated bets unintentionally.

A thoughtful conception of the Global Market Portfolio would seek ways to gain exposure to the most persistent systematic market anomalies, while preserving the core capitalization and geographic exposures of the original model. Excess returns from factor exposures might net investors an extra 0.25% to 0.5% per year, with slightly lower risk.

In our opinion, the Global Market Portfolio with Factor Tilts represents the ultimate passive policy portfolio benchmark for institutions and private investors alike, as it represents the average expectations of all participants in markets. It should be the starting point for most long-term investment policies, and investors should thoroughly question the merits of any deviation in the absence of a carefully scrutinized and statistically significant long-term edge.





Setting Expectations for Monthly Trading Systems


Systematic researchers overwhelmingly use monthly holding periods to test strategies. This is probably driven by the availability of long-term monthly total return data for a wide variety of indexes, whereas daily data is scarcer. This is fine to a point, but investors may not be aware of just how sensitive results might be to day-of-the-month effects which may not persist out of sample.

I admit that until a couple of years ago we failed to account for these effects as well. While we use daily data for testing, we embraced the monthly rebalancing convention for easy comparisons against other published strategies, and for parsimonious prototyping. Only after we validated a method using monthly rebalancing did we take the more time consuming step of exploring it at a daily frequency.

However, we discovered that results at monthly trade frequencies are often misleading. Moreover, this effect appears to be especially problematic for momentum based systems, with slightly less troubling results for moving average approaches (see here). Results observed when trades were executed on the first day of the month, based on signals generated from closing prices on the last day of the prior month, were often higher than those observed when signals and trades were generated on other days in the month. This is alarming because return and volatility assumptions for a system might be substantially over- or understated based on lucky or unlucky trade dates in historical testing.

Figure 1 shows the performance for simple top 2 and top 3 global asset class momentum strategies (6 month lookback period) traded on different days of the month. Trades were executed at the close of the day after signals were generated (t+1). We rotated among the following 10 assets, which we extended back to 1995 using index data: DBC, EEM, EWJ, GLD, IEF, IYR, RWX, TLT, VGK and VTI. Assets were held in equal weight for simplicity.

Figure 1. Performance of top 2 and 3 asset momentum systems traded on each day of the month

Day_of_Month_Charts

Source: Data from Bloomberg
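
For readers who want to reproduce the experiment, the sketch below outlines one way to run a top-n momentum test on an arbitrary trade day of the month using pandas. Loading the daily total-return prices for the 10 assets is left to the reader, and warm-up and calendar edge cases are handled only crudely:

```python
import pandas as pd

def top_n_momentum(prices: pd.DataFrame, n: int = 2, trade_day: int = 1,
                   lookback: int = 126) -> pd.Series:
    """Equity curve of an equal-weight top-n momentum strategy.

    Signals use ~6-month (126 trading day) returns measured on the last
    trading day of each month whose calendar day is <= trade_day; trades
    execute at the next day's close (t+1), as in the study above.
    """
    momentum = prices / prices.shift(lookback) - 1.0
    # pick one signal date per calendar month
    months = prices.groupby([prices.index.year, prices.index.month])
    signal_dates = [g.index[g.index.day <= trade_day][-1]
                    for _, g in months if (g.index.day <= trade_day).any()]
    locs = prices.index.get_indexer(signal_dates)
    equity, curve = 1.0, {}
    for i in range(len(locs) - 1):
        if locs[i + 1] + 1 >= len(prices):
            break
        picks = momentum.iloc[locs[i]].dropna().nlargest(n).index
        entry = prices.index[locs[i] + 1]       # t+1 execution
        exit_ = prices.index[locs[i + 1] + 1]   # next rebalance's t+1
        period_ret = (prices.loc[exit_, picks] / prices.loc[entry, picks] - 1).mean()
        equity *= 1.0 + period_ret
        curve[exit_] = equity
    return pd.Series(curve)
```

Running the function for each `trade_day` from 1 to 31 and computing CAGR, Sharpe and maximum drawdown on each resulting equity curve reproduces the spirit of Figure 1.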

Visual inspection might suggest the best trade day of the month for maximum CAGR to be the 11th for both 2 asset and 3 asset systems. Perhaps not surprisingly then, the 11th also delivers the highest Sharpe ratio. The observed Sharpe ratio from trading a top 2 asset system on the 1st day of the following month based on signals from the last day of the month is in the 90th percentile of all observations. Huh.

Interestingly, the 11th is one of the worst days to trade for drawdowns, at least for the 2 asset system, though trading on the 27th and 31st produces the worst drawdowns for 2 and 3 asset systems, respectively. The trade date which produces the smallest drawdowns for both systems is the 25th.

Novice readers may be tempted to believe that, if they stick to trading on the first day of the month based on signals from the last day, they will continue to generate better results than if they trade on different days. Others may want to switch their monthly trade date to the date which has historically optimized Sharpe ratio or returns. We would urge you to resist these temptations. While some may claim that structural effects like institutional position squaring may provide stronger signals toward the end of the month, there is no evidence in the data that supports this conclusion. It is almost certainly just more random noise.

Rather than using this information to change trade dates, we would encourage you to alter your expectations instead. One way to manage expectations is to expect performance near the middle of, or nearer the bottom of, the historical performance distribution. Figure 2 describes the quantiles of performance across all trade dates for the two systems under investigation.

Figure 2. Quantile performance of top 2 and top 3 asset 6-month momentum systems across days of the month

Quantiles_

Source: Data from Bloomberg

We would guide expectations toward 5th percentile values because, in practice, if performance exceeds the 5th percentile “line in the sand,” it is reasonable to believe that the strategy is performing within the distribution of its expected returns. If it delivers persistent performance below this level, it might be fair to wonder if there is a genuine flaw in the investment methodology. For example, if you are contemplating trading a simple monthly 3-asset momentum system with a 6-month lookback horizon, you might expect a Sharpe ratio (pre-fees, costs and slippage) of about 0.9, and a maximum drawdown of about 38%. (For those interested in ways to improve on this simple strategy, may we humbly suggest that you explore our ‘Dynamic Asset Allocation for Practitioners’ series.)
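
A minimal sketch of that “line in the sand” calculation, with random placeholder numbers standing in for the per-trade-day backtest results:

```python
import numpy as np

# Sharpe ratios from running the same system on each of the 31 possible
# trade days of the month. Random placeholders, not our actual results.
sharpes_by_day = np.random.default_rng(0).normal(1.0, 0.15, 31)

median = np.percentile(sharpes_by_day, 50)
line_in_the_sand = np.percentile(sharpes_by_day, 5)
print(f"median Sharpe {median:.2f}; 5th percentile {line_in_the_sand:.2f}")
# Persistent live performance below the 5th percentile is a red flag.
```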

This might be a hard pill to swallow for novice quants who are applying (or considering applying) a monthly system, but in our opinion it’s better to know the risks in advance. It’s also a reason to test strategies using daily data, as monthly periodicity will dramatically understate risk parameters, especially drawdowns. Candidly, if you are invested in a strategy that trades monthly based on a monthly backtest or even a real-time track record, you are probably taking considerably more risk than you think.





Valuation Based Equity Market Forecasts: Q2 2014


Any analysis that relies on the past to offer guidance about the future makes the strong assumption that the future will in fact resemble the past. We have no guarantee that this will be the case. Many optimistic analysts assert that the invention of central banking, global communications and trade, robotics, 3D printing, Paul Krugman, or any number of ‘game changers’ that have evolved over the past few decades renders comparisons with our past misguided. Surely we won’t make the mistakes of our ancestors; there will be no more war, no misguided political decisions, no shortsighted thinking, no natural disasters, no panics or conflicts or excesses which derail our arc toward ever-increasing prosperity.

In case there is any ambiguity, we do not espouse this Pollyanna-esque view. So long as markets, economies and politics are dominated by human judgement, the future is likely to resemble the past in most important respects.

Furthermore, there is, and perhaps always will be, a discussion of whether the stock market is in a ‘bubble’ or whether it is undervalued, overvalued or fairly valued. These labels are meaningless. In reality, markets are always at the ‘right price’ because the entire objective of free markets is to find the clearing price for assets, and the ‘right price’ is the price at which an asset transaction clears between a buyer and a seller. The right price might be higher, from a valuation standpoint, than the historical average implying lower than average future returns, or lower than the historical average implying higher than average future returns. The concepts of ‘right price’ and ‘overvalued’ are not mutually exclusive.

Moreover, we are acutely aware that interest rates are a discounting mechanism and thus low interest rates (especially rates which are expected to remain low for a long time) may justify higher than average equity valuations. This may be a normal condition of asset markets, but it doesn’t alter forecasts about future returns. While markets might be ‘fairly priced’ at high valuations relative to exceedingly low long-term interest rates, this does not change the fact that future returns are likely to be well below average. Again, a market can be ‘fairly priced’ relative to long-term rates, yet still exhibit high valuations implying lower than average future returns. We wouldn’t argue with the assertion that current conditions exhibit these very qualities. However, this fact does not change ANY of the conclusions from the analysis below.

—————————————————————————

We accept the decisive evidence that markets and economies are complex, dynamic systems which are not reducible to linear cause-effect analysis over short or intermediate time frames. However, the future is likely to rhyme with the past. Thus, we believe there is substantial value in applying simple statistical models to discover average estimates of what the future may hold over meaningful investment horizons (10+ years), while acknowledging the wide range of possibilities that exist around these averages.

To be crystal clear, the commentary below makes no assertions whatsoever about whether markets will carry on higher from current levels. Expensive markets can get much more expensive in the intermediate term, and investors need look no further back than the late 2000s for just such an example. However, the historical implication of investing in expensive markets is that, at some point in the future, perhaps years from now, the market has a very high probability of trading back below current prices; perhaps far below. More importantly, investors must recognize that buying stocks at very expensive valuations will necessarily lead to returns over the subsequent 10 – 20 years that are far below average.

Many studies have attempted to quantify the relationship between Shiller PE and future stock returns. Shiller PE smooths away the spikes and troughs in corporate earnings which occur as a result of the business cycle by averaging inflation-adjusted earnings over rolling historical 10-year windows. As discussed further in the Considerations section below, in this version of our analysis we have incorporated two new earnings series to address thoughtful concerns raised by other analysts in recent commentaries. We added the Bloomberg series <T12_EPS_AGGTE> and the S&P 500 operating earnings series to account for potentially meaningful changes to GAAP accounting rules in 2001. I would note that Bill Hester at Hussman Funds has addressed this issue comprehensively in a recent report, which we would strongly encourage you to read. Notwithstanding the arguments against using these new series, we felt they offered sufficient merit to include them in our analysis. All CAPE related analyses in this report use a simple average of these three earnings series to calculate the denominator in the CAPE ratio.

[In addition, we reiterate that the final multiple regression model that we use to generate our forecast does not actually include the CAPE ratio as an input. As valuation measures go, this metric is actually less informative than the other three, and reduces the statistical power of the forecast.]

This study also contributes substantially to research on smoothed earnings and Shiller PE by adding three new valuation indicators: the Q-Ratio, total market capitalization to GNP, and deviations from the long-term price trend. The Q-Ratio measures how expensive stocks are relative to the replacement value of corporate assets. Market capitalization to GNP accounts for the aggregate value of U.S. publicly traded business as a proportion of the size of the economy. In 2001, Warren Buffett wrote an article in Fortune where he stated, “The ratio has certain limitations in telling you what you need to know. Still, it is probably the best single measure of where valuations stand at any given moment.” Lastly, deviations from the long-term trend of the S&P inflation adjusted price series indicate how ‘stretched’ values are above or below their long-term averages.

Together with the Shiller PE, these measures take on further gravity when we consider that they are derived from four distinct facets of financial markets: Shiller PE focuses on the earnings statement; Q-ratio focuses on the balance sheet; market cap to GNP focuses on corporate value as a proportion of the size of the economy; and deviation from price trend focuses on a technical price series. Taken together, they capture a wide swath of information about markets.
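
For reference, the four measures can be written compactly as follows (our notation; see the cited sources for the precise construction of each series):

$$\text{CAPE}_t=\frac{P_t}{\tfrac{1}{10}\sum_{i=1}^{10}E^{\text{real}}_{t-i}},\qquad Q_t=\frac{\text{market value of corporate equities}}{\text{replacement cost of corporate assets}},$$

$$\frac{\text{MC}_t}{\text{GNP}_t}=\frac{\text{total market capitalization}}{\text{gross national product}},\qquad \text{residual}_t=\ln P^{\text{real}}_t-\widehat{\ln P^{\text{real}}_t},$$

where the hat denotes the fitted long-term trend of the real price series.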

We analyzed the power of each of these ‘valuation’ measures to explain inflation-adjusted stock returns including reinvested dividends over subsequent multi-year periods. Our analysis provides compelling evidence that future returns will be lower when starting valuations are high, and that returns will be higher in periods where starting valuations are low.

Again, we are not making a forecast of market returns over the next several months; in fact, markets could go substantially higher from here. However, over the next 10 to 15 years, markets are very likely to revert to average valuations, which are much lower than current levels. This study will demonstrate that investors should expect 6.5% real returns to stocks only during those very rare occasions when the stock market passes through ‘fair value’ on its way to becoming very cheap, or very expensive. At all other periods, there is a better estimate of future returns than the long-term average, and this study endeavors to quantify that estimate.

Investors should be aware that, relative to meaningful historical precedents, markets are currently expensive and overbought by all measures covered in this study, indicating a strong likelihood of low inflation-adjusted returns going forward over horizons of 10-20 years.

This forecast is also supported by evidence from an analysis of corporate profit margins. In a recent article, Jesse Livermore at Philosophical Economics published a long-term chart of adjusted U.S. profit margins, reproduced in Chart 1 below, which demonstrates the magnitude of upward distortion endemic in current corporate profits. Companies have clearly been benefiting from a period of extraordinary profitability.

Chart 1. Inflated U.S. adjusted profit margins

nonfincp

Source: Philosophical Economics, 2014

The profit margin picture is critically important. Jeremy Grantham recently stated, “Profit margins are probably the most mean-reverting series in finance, and if profit margins do not mean-revert, then something has gone badly wrong with capitalism. If high profits do not attract competition, there is something wrong with the system and it is not functioning properly.” On this basis, we can expect profit margins to begin to revert to more normalized ratios in coming years. If so, stocks may face a future where multiples to corporate earnings are contracting at the same time that earnings growth is contracting. This double headwind may partially explain why our statistical model predicts such low real returns in coming years. Caveat Emptor.

Modeling Across Many Horizons

Many studies have been published on the Shiller PE, and how well (or not) it estimates future returns. Almost all of these studies apply a rolling 10-year window to earnings as advocated by Dr. Shiller. But is there something magical about a 10-year earnings smoothing factor? Further, is there anything magical about a 10-year forecast horizon?

Table 1 below provides a snapshot of some of the results from our analysis. The table shows estimated future returns based on a coherent aggregation of several factor models over some important investment horizons.

Table 1. Factor Based Return Forecasts Over Important Investment Horizons

Forecast_Summary

Source: Shiller (2013), DShort.com (2013), Chris Turner (2013), World Exchange Forum (2013), Federal Reserve (2013), Butler|Philbrick|Gordillo & Associates (2013)

You can see from the table that, according to a model that incorporates valuation estimates from 4 distinct domains, and which explains over 80% of historical returns since 1928, stocks are likely to deliver less than 0% in real total returns over the next 5 to 20 years. Budget accordingly.

Process

The purpose of our analysis was to examine several methods of capturing market valuation to determine which methods were more or less efficacious. Furthermore, we were interested in how best to integrate our valuation metrics into a coherent statistical framework that would provide us with the best estimate of future returns. Our approach relies on a common statistical technique called linear regression, which takes as inputs the valuation metrics we calculate from a variety of sources, and determines how sensitive actual future returns are to contemporaneous observations of each metric.

Linear regression produces a linear function, which by definition can be described by a slope and an intercept, which we provide below for each metric and each forecast horizon. A further advantage of linear regression is that we can measure how much confidence to place in each estimate; the quantity we use for this purpose is the R-Squared, the proportion of variation in future returns explained by the metric. The following matrices show the R-Squared ratio, regression slope, regression intercept, and current forecast returns based on a regression analysis for each valuation factor. The matrices are heat-mapped so that larger values are reddish, and small or negative values are blue-ish.
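
The following is a sketch of the per-metric procedure, our reconstruction of the mechanics rather than the actual code behind the matrices. It regresses subsequent real total returns on a valuation metric and reports the slope, intercept, and R-Squared, using synthetic data purely for illustration:

```python
import numpy as np
from scipy import stats

def fit_metric(metric, future_returns):
    """Slope, intercept and R-squared of future returns on a valuation metric."""
    slope, intercept, r, _, _ = stats.linregress(metric, future_returns)
    return slope, intercept, r ** 2

# synthetic illustration: ordinal metric ranks vs. subsequent 15-year returns
rng = np.random.default_rng(1)
rank = np.arange(1.0, 1001.0)
future_15y = 0.12 - 1e-4 * rank + rng.normal(0.0, 0.02, rank.size)

slope, intercept, r2 = fit_metric(rank, future_15y)
print(f"slope {slope:.6f}, intercept {intercept:.4f}, R-squared {r2:.0%}")
```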

Matrix 1. Explanatory power of valuation/future returns relationships

r_squared_matrix

Source: Shiller (2014), DShort.com (2014), Chris Turner (2014), World Exchange Forum (2014), Federal Reserve (2014), Butler|Philbrick|Gordillo & Associates (2014)

Matrix 1 contains a few important observations. Notably, over periods of 10-20 years, the Q-Ratio, very long-term smoothed PE ratios, and market capitalization / GNP ratios are equally explanatory, with R-Squared ratios around 55%. The best estimate (perhaps tautologically, given the derivation) is derived from the price residuals, which simply quantify how extended prices are above or below their long-term trend. The worst estimates are those derived from trailing 12-month PE ratios (PE1 in Matrix 1 above). Many analysts quote ‘Trailing 12-Months’ or TTM PE ratios for the market as a tool to assess whether markets are cheap or expensive; if you hear an analyst quoting the market’s PE ratio, odds are they are referring to this TTM number. Our analysis slightly modifies this measure by averaging the PE over the prior 12 months rather than using trailing cumulative earnings through the current month, but this change does not substantially alter the results. As it turns out, TTM (or PE1) Price/Earnings ratios offer the least information about subsequent returns relative to all of the other metrics in our sample. As a result, investors should be extremely skeptical of conclusions about market return prospects presented by analysts who justify their forecasts based on trailing 12-month ratios.

Forecasting Expected Returns

We expect you to be skeptical of our unconventional assertions, so below we provide the precise calculations we used to determine our estimates. The following matrices provide the slope and intercept coefficients for each regression. We have provided these in order to illustrate how we calculated the values for the final matrix of predicted future returns to stocks below.

Matrix 2. Slope of regression line for each valuation factor/time horizon pair.

Slope_Matrix

Source: Shiller (2014), DShort.com (2014), Chris Turner (2014), World Exchange Forum (2014), Federal Reserve (2014), Butler|Philbrick|Gordillo & Associates (2014)

Matrix 3. Intercept of regression line for each valuation factor/time horizon pair.

Intercept_Matrix

Source: Shiller (2014), DShort.com (2014), Chris Turner (2014), World Exchange Forum (2014), Federal Reserve (2014), Butler|Philbrick|Gordillo & Associates (2014)

Matrix 4 shows forecast future real returns over each time horizon, as calculated from the slopes and intercepts above, using the most recent values for each valuation metric (through June 2014). For statistical reasons which are beyond the scope of this study, when we solve for future returns based on current monthly data, we utilize the rank of each metric in the equation, not its nominal value. For example, the 15-year return forecast based on the current Q-Ratio can be calculated by multiplying the current ordinal rank of the Q-Ratio (1343) by the slope from Matrix 2 at the intersection of ‘Q-Ratio’ and ’15-Year Rtns’ (-0.000086), and then adding the intercept at the same intersection (0.118875) from Matrix 3. The result is 0.003, or 0.30%, as you can see in Matrix 4 below at the same intersection (Q-Ratio | 15-Year Rtns).
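
Written out, the calculation for that cell is simply:

$$1343 \times (-0.000086) + 0.118875 \approx 0.003,\ \text{or about } 0.3\%\ \text{per year}.$$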

Matrix 4. Modeled forecast future returns using current valuations.

Forecast_Matrix1

Source: Shiller (2014), DShort.com (2014), Chris Turner (2014), World Exchange Forum (2014), Federal Reserve (2014), Butler|Philbrick|Gordillo & Associates (2014)

Finally, at the bottom of the above matrix we show the forecast returns over each future horizon, based both on a weighted average of all of the forecasts and on our best-fit multiple regression from the factors above. From the matrix, note that forecasts for future real equity returns integrating all available valuation metrics are less than 2% per year over horizons covering the next 5 to 20 years. We also provide the R-squared for each multiple regression underneath each forecast; you can see that at the 15-year forecast horizon, our regression explains over 80% of total returns to stocks.

Chart 2 below demonstrates how closely the model tracks actual future 15-year returns. The red line tracks the model’s forecast annualized real total returns over subsequent 15-year periods using our best fit multiple regression model. The blue line shows the actual annualized real total returns over the same 15-year horizon.

Chart 2. 15-Year Forecast Returns vs. 15-Year Actual Future Returns

Chart
Source: Shiller (2014), DShort.com (2014), Chris Turner (2014), World Exchange Forum (2014), Federal Reserve (2014)

Putting the Forecasts to the Test

A model is not very interesting or useful unless it actually does a good job of predicting the future. To that end, we tested the model’s predictive capacity at some key turning points in markets over the past century or more to see how well it predicted future inflation-adjusted returns.

Table 2. Comparing Long-term average forecasts with model forecasts

Examples_Matrix

Source: Shiller (2014), DShort.com (2014), Chris Turner (2014), World Exchange Forum (2014), Federal Reserve (2014), Butler|Philbrick|Gordillo & Associates (2014)

You can see we tested against periods during the Great Depression, the 1970s inflationary bear market, the 1982 bottom, and the middle of the 1990s technology bubble in 1995. The table also shows expected 15-year returns given market valuations at the 2009 bottom, and current levels. These are shaded green because we do not have 15-year future returns from these periods yet. Observe that, at the very bottom of the bear market in 2009, real total return forecasts never edged higher than 7%, which is only slightly above the long-term average return. This suggests that prices just approached fair value at the market’s bottom; they were nowhere near the level of cheapness that markets achieved at bottoms in 1932 or 1982. As of the end of June 2014, annualized future returns over the next 15 years are expected to be less than zero percent.

We compared the forecasts from our model with what would be expected from using the long-term average real return of 6.5% as a constant forecast, and found that always using the long-term average return as the future return estimate resulted in roughly five times as much error as the estimates from our multi-factor regression model over 15-year forecast horizons (1.11% annualized return error from our model vs. 5.49% using the long-term average). Clearly the model offers substantially more insight into future return expectations than simple long-term averages, especially near valuation extremes (dare we say, like we observe today?).

Conclusions

The ‘Regression Forecast’ return predictions along the bottom of Matrix 4 are robust predictions for future stock returns, as they account for over 100 different cuts of the data, use 4 distinct valuation techniques, and utilize the most explanatory statistical relationships. Notwithstanding the statistical challenges related to overlapping periods, described in the Considerations section below, the models explain a meaningful portion of future returns. Despite the model’s robustness over longer horizons, it is critical to note that even this model has very little explanatory power over horizons of less than 6 or 7 years, so it should not be used as a short-term market-timing tool.

Returns in the reddish row labeled “PE1” in Matrix 4 were forecast using just the most recent 12 months of earnings data, and correlate strongly with common “Trailing 12-Month” PE ratios cited in the media. Matrix 1 demonstrates that this trailing 12-month measure is not worth very much as a tool for forecasting future returns over any horizon. However, the more constructive results from this metric probably help to explain the general consensus among sell-side market strategists that markets will do just fine over coming years. Just remember that these analysts have no proven ability whatsoever to predict market returns (see here, here, and here). This reality probably has less to do with the analytical ability of most analysts, and more to do with the fact that most clients would choose to avoid investing in stocks altogether if they were told to expect negative real returns over the long-term from high valuations.

Investors would do much better to heed the results of robust statistical analyses of actual market history, and play to the relative odds. This analysis suggests that markets are currently expensive, and asserts a very high probability of low returns to stocks (and possibly other asset classes) in the future. Remember, any returns earned above the average are necessarily earned at someone else’s expense, so it will likely be necessary to do something radically different than everyone else to capture excess returns going forward. Those investors who are determined to achieve long-term financial objectives should be heavily motivated to seek alternatives to traditional investment options given the grim prospects outlined above. Such investors may find solace in some of the approaches related to ‘tactical alpha’ that we have described in a variety of prior articles.
————————————————–
Considerations & Next Steps

We first published a valuation based market forecast in September of 2010. At that time we used only the Shiller PE data to generate our forecast, and our analysis suggested investors should expect under 5% per year after inflation over the subsequent 10-year horizon. Over the 40 months since, we have introduced several new metrics and applied much more comprehensive methods to derive our forecast estimates. Still, our estimates are far from perfect.

From a statistical standpoint, the use of overlapping periods substantially weakens the statistical significance of our estimates. This is unavoidable, as our sample only extends back to 1900, which gives us only 114 years to work with, and our research suggests that secular mean reversion exerts its strongest influence on a periodicity somewhere between 15 and 20 years. As a result, our true sample size is somewhere between 5 and 6, which is not very high. 

Aside from statistical challenges, readers should consider the potential for issues related to changes in the way accounting identities have been calculated through time, changes to the geographic composition of earnings, and myriad other factors. For a comprehensive analysis of these challenges we encourage readers to visit the Philosophical Economics (PE) blog.

It should be noted that, while we recommend readers take the time to consider the comprehensive analyses published by Philosophical Economics over the past few months, we are rather skeptical of some of the author’s more recent assertions. In particular, we challenge the notion that model errors related to dividends, growth and valuations are independent of one another, and can therefore be disaggregated in the way the author presents. Dividend yields and earnings growth are inextricably and causally related to each other (lower dividend payout ratios are causally related to stronger future earnings growth because of higher rates of reinvestment, and vice versa), thus we would expect them to have inverse cumulative error terms. The presence of such inverse error terms simply proves this causal link empirically, and offers no meaningful information about the validity of valuation based reversion models.

We also take grave issue with the contention that the valuation-based reversion observed in stocks over periods of 10 years (which Hussman uses) should be dismissed as ‘curve fitting’. According to PE, since we observe some reversion over 10 year periods we should observe the same or stronger reversion over a 30 year horizon, because 30 years is a multiple of 10 years. But the author discovers virtually no explanatory relationship at 30 year reversion horizons. From this observation he concludes that valuation-based mean-reversion at the 10 year horizon is invalid, and that the 10 year reversion relationship is a curve-fit aberration with no statistical significance. 

The logical flaw in this argument is revealed by an examination of our own results. Our analysis suggests markets exhibit most significant mean reversion at periods of 15 to 20 years (though 10 years is still significant). This suggests that markets will have completed a full cycle (that is, will have come ‘full circle’) after 30 or 40 years. In other words, if markets are currently expensive, then 15 or 20 years from now they will probably be cheap. Of course, 15 or 20 years on from that point (or 30 to 40 years from now) markets will have returned to their original expensive condition. Thus no mean-reversion relationship will be observed over 30-40 years. Any analyses of mean reversion over periods of 30 or 40 years will not find any relationship because cheap prices will have passed through expensive prices and gone back to cheap over the span of the full cycle. 

In case this is still unclear, consider a similar concept in a related domain: stock momentum. Eugene Fama, the father of ‘efficient markets’, described the momentum anomaly in equities as ‘the premier unexplained anomaly’, yet it only works at a frequency of about 2 to 12 months; that is, if you buy a basket of stocks that had the greatest price increase over the prior 2 to 12 month period, those stocks are likely to be among the top performers over the next few weeks or months. However, momentum measured over a 3 to 5 year period works in precisely the opposite manner: the worst performing stocks over the past 3 to 5 years generate the strongest returns. Now, 3 and 5 years are multiples of 12 months, yet extending the analysis to multiples of the 12-month frequency delivers the exact opposite effect. Should we dismiss the momentum anomaly as curve fitting then?

Eugene Fama doesn’t think so, and neither do we.

That said, we see value in the questions PE raised about the changing nature of earnings series and margin calculations. Largely driven by PE’s analysis, we integrated new earnings series from Bloomberg and S&P into our Cyclically Adjusted PE calculation for model calculations. Primarily, the new series adjust earnings for changes to GAAP rules in 2001 related to corporate write-downs. Each of the series has merit, so we took the step of averaging them without prejudice. I’m sure bulls and bears alike will find this method unsatisfying; we certainly hope so, as the best compromises have this precise character.

Importantly, the new earnings series do not alter the final regression forecast model results because our multiple regression model rejects the Shiller PE as statistically insignificant to the forecast. That is, it is highly collinear with, but less significant than, other series like market cap/GNP and the Q-Ratio. This has been the case from the beginning of this article series, so it isn’t due to the new earnings data. Nevertheless we include regression parameters and r-squared estimates for all of the modified Shiller PEs in the matrices as usual.

The bottom line is that, despite statistical and accounting challenges, our indicators have proved to be of fairly consistent value in identifying periods of over and under-valuation in U.S. markets over about a century of observation, notwithstanding the last two decades. We admit that the two decades since 1994 seem like strange outliers relative to the other seven decades; history will eventually prove whether this anomaly relates to a structural change in the calculation of the underlying valuation metrics, a regime shift in the range of possible long-term returns, an increase in the ambient slope of drift, or something as of yet entirely unconsidered.

We all must acknowledge that the current globally coordinated monetary experiment truly has no precedent in modern history. For this reason the range of potential outcomes is much wider than it might otherwise be. Things could persist for much longer, and reach never before seen extremes (in both directions, mind you!) before it’s over.

Lastly, I am struggling to reconcile a conundrum I identified very early in the development of our multi-factor model. Namely, the fact that the simple regression of real total returns with reinvested dividends carries very different implications than the suite of other indicators we have tested. Georg Vrba explores this model in some detail, and we recommend readers take a moment to consider his views in this domain. I am troubled by the theoretical veracity of incorporating dividend reinvestment for extrapolation purposes, because the vast majority of dividends are NOT reinvested, but rather are paid out, and represent a material source of total income in the economy. However, the trend fit is surprisingly tight, and I can’t say with conviction that the model is any less valid than the other methods we apply in this analysis. It is a puzzle.

The bottom line is that as researchers, we deeply appreciate respectful disagreements. This is because there are only three possible outcomes for the introduction of new and contrary evidence into a discussion. First, it could prove worthy of inclusion, improving the accuracy and timeliness of estimates. The inclusion of new earnings series is an excellent example of this, as we’ve been convinced by the evidence presented by Jesse Livermore in this domain. Second, it could prove unworthy of inclusion, which we can only know upon stress-testing our model, thereby increasing our confidence in its reliability. The rejection of the notion that valuation-based reversion is curve-fitting would be such an example. Or third, it could provoke a ‘deep dive’ that evolves into a complete overhaul of current models, and/or an interesting future research publication. Vrba’s dividend reinvestment notion is a prominent member of this category.

We thank everyone – on all sides of the debate – for their continued contributions (either explicitly or implicitly) to the ongoing evolution of this research.





World Cup Outcomes Are Mostly Random: So Who Cares?


This post will be short and sweet as it’s largely an addendum to our previous post NFL Parity, Sample Size and Manager Selection. It was motivated primarily by an interesting analysis by Tom Murphy, a physics professor at the University of California, San Diego. We greatly admire Dr. Murphy and highly recommend his blog.

Like many North Americans, Dr. Murphy doesn’t appear to be a huge fan of the beautiful game. Unlike most North Americans, however, he grounds his displeasure in an ingenious, if sterile, statistical analysis of game outcomes which concludes that soccer games are simply “well executed random events.” His article about World Cup outcomes – nay, all soccer outcomes in general – is interesting for several reasons.

First, sport outcomes and investment outcomes are dictated by different types of distributions.  In our previous calculations about the “true superiority” of an investment strategy, we used a t-distribution.  This satisfied a number of criteria we had in making the calculation, including:

  1. It allowed for the full range of potential outcomes, from a 100% loss to an infinite gain;
  2. It better approximated the leptokurtic reality (fat tails relative to a normal distribution) of investment returns, and;
  3. It provided an “accurate-enough” distribution of returns relative to our investment of time in developing the algorithm.

Of course, sports outcomes do not share the characteristics of investment returns, and yet in our previous post we used the same t-distribution to analyze sports outcomes. For this purpose, however, a Poisson distribution is a far more useful tool because:

  1. We’re measuring scores, which are a discrete outcome of every game;
  2. The average score over time is a known, measurable number;
  3. Given #2, the probability that the average will be achieved is proportional to the amount of time (or games) measured, and;
  4. Given #3, the probability of any score occurring approaches zero as the sample size (or number of games) approaches zero, and there are no negative score outcomes.

Professor Murphy, in his article, lays out a simple example:

“We can turn the Poisson distribution around, and ask: if a team scores N points, what is the probability (or more technically correct, the probability density) that the underlying expectation value is X? This is more relevant when assessing an actual game outcome. An example appears in the plot below. The way to read it is: if I have an expectation value of <value on the horizontal axis>, what is the probability of having 2 as an outcome? Or inversely—which is the point—if I have an outcome of 2, what is the probability (density) of this being due to an expectation value of <value on the horizontal axis>?”

inverse-poisson
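
The same inverse reading is easy to reproduce. The sketch below (our illustration of the idea, not Murphy’s code) evaluates the Poisson likelihood of an observed score of 2 across a grid of candidate expectation values:

```python
import numpy as np
from scipy.stats import poisson

observed_score = 2
lam = np.linspace(0.01, 8.0, 400)              # candidate expectation values
likelihood = poisson.pmf(observed_score, lam)  # P(score == 2 | lam)

peak = lam[np.argmax(likelihood)]
print(f"likelihood of scoring {observed_score} peaks at expectation ~{peak:.2f}")
# The curve is broad: many different 'true' scoring rates are consistent
# with a single game's outcome, which is Murphy's point about randomness.
```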

The nuances of the distributions notwithstanding, our conclusions on sports and investment outcomes still stand.  Namely:

“NFL parity – and far too often, investment results – are both mirages.  Small sample sizes in any given NFL season and high levels of covariance between many investment strategies make it almost impossible to distinguish talent from luck over most investors’ investment horizons.  Marginal teams creep into the playoffs and go on crazy runs, and average investment managers have extended periods of above-average performances.”

This brings us to the second major point, which is that unlike investments where we gravitate toward risk management, in sports we tend to gravitate towards risk maximization.  In other words, to the extent that we don’t have a vested interest in the outcome, when watching a sporting event the best we can hope for is an exciting match.  In this regard, we believe the good professor has his thinking exactly right and exactly backward when, in describing why he doesn’t enjoy watching the World Cup, he says:

“…I don’t follow soccer—in part because I suspect it boils down to watching well-executed random events…What I have seen (and I have been to a World Cup game) seems to amount to a series of low-probability scoring attempts, where the reset button (control of the ball) is hit repeatedly throughout the game. I do not see a lot of long-term build-up of progress. One minute before a goal is scored, the crowd has no idea/anticipation of the impending event. American football by contrast often involves a slow march toward the goal line. Basketball has many changes of control, but scoring probability per possession is considerably higher. Baseball is a mixture: as bases load up, chances of scoring runs ticks upward, while the occasional home run pops up at random…”

Again, his characterization is spot on but his conclusion is completely wrong.  Nike seems to understand the appeal of World Cup risk with their new campaign “Risk Everything,” and the accompanying slogan “Playing it safe is the biggest risk.”

And then there’s also this video, which has over 56 million YouTube hits:

It’s not in spite of the randomness, but because of it, that the world consumes as many World Cup games as possible. And while I’m at it, it’s why we wait with bated breath to see LeBron posterize some poor guy, why we so deeply treasure the memory of that triple play that one time, and why we recall the Immaculate Reception more clearly than just about any football play ever.

Professor Murphy pillories soccer because of its randomness, but we wonder if deep down, wherever he has secreted away his sense of whimsy, he would admit that his fondness for his most treasured sports memories is largely due to their incredible randomness. But we digress…

Oh geez: now I’ve wasted time on the World Cup too!





Article in Taxes & Wealth Management


The Miller Thompson / Reuters monthly Taxes and Wealth Management newsletter carried an article we authored on the relationship between portfolio volatility and retirement planning.  This is a fairly regular topic on GestaltU and in discussions with clients because it’s critical to success but not well understood.  We are pleased to have been selected for publication, and hope that readers found value in our contribution.

Our article begins on page 14, but there’s lots of meaty material in there.

Taxes And Wealth Management May 2014