

Are We Baking Portfolios with Bad Ingredients?


Written by: Tommi Johnsen

It’s been said that baking is like chemistry for hungry people.

Meaning, it’s complicated.

Just one mistake in the recipe or measurement of your ingredients and your bread can wind up with no bread-like qualities at all.

Investing can be the same.

Good ingredients or data, and the proper measurement and comparison of them is key.

As with other things:  Bad data in = bad data out.

Key ingredients in modern portfolio construction models are correlation and volatility metrics. Billions of dollars can be influenced by comparisons like the one in Table 1, which illustrates how various asset classes have moved relative to one another (their correlations) and how risky they are relative to one another (their volatility, or standard deviation).

This chart is broadly distributed each quarter by a leading asset management and wealth firm, followed by a widely attended guide-to-the-markets call with their clients and large asset owners.

The calculations are based on the returns of each asset class, so the quality of the return ingredients, and how they are measured relative to other asset classes, is critical.

Unfortunately, when you look closely at the data in Table 1, and many others like it, the quality of the ingredients and how they are mixed together looks questionable.

To highlight the issues that make us question the stability of some of the data in this table, and others like it, below are five concerns, each followed by a possible solution. We hope this will help investors formulate questions about the ingredients used in presentations like this and avoid recipes that might sour allocations.

Concern #1 – Apples and Oranges

If you were trying to bake an apple pie, you would want to make sure the recipe didn't mix in oranges. That, however, is exactly what Table 1 does (see the section of the disclosure that we have bolded below in Table 2).

You have to look closely, but if you do, you find the following:

Public market returns are as of the 10-year period ending 6/30/19.

Private market returns are as of the 10-year period ending 12/31/18.

For correlations or volatility comparisons to be sound, returns need to cover the same time period – period.

The Solution: 

If the data periods do not match exactly, the correlation should be viewed as suspect and subject to discarding altogether.
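As a rough illustration of why period alignment matters, here is a minimal Python sketch (all tickers, dates, and return figures below are hypothetical) that computes a correlation only over the quarters two series actually share, and reports which quarters could not be compared:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def aligned_correlation(a, b):
    """a, b: dicts of quarter-end label -> quarterly return.
    Correlate only the quarters that both series report."""
    common = sorted(set(a) & set(b))
    return pearson([a[q] for q in common], [b[q] for q in common]), common

# Hypothetical return series: the private series stops two quarters early,
# mimicking the 6/30/19 vs. 12/31/18 mismatch in Table 1.
public = {"2018Q1": 0.01, "2018Q2": 0.03, "2018Q3": 0.07,
          "2018Q4": -0.14, "2019Q1": 0.13, "2019Q2": 0.04}
private = {"2018Q1": 0.03, "2018Q2": 0.05, "2018Q3": 0.04, "2018Q4": -0.01}

corr, common = aligned_correlation(public, private)
dropped = sorted(set(public) ^ set(private))
print(f"correlation over {len(common)} shared quarters: {corr:.2f}")
print("quarters that could not be compared:", dropped)
```

A chart built from the mismatched full samples would silently correlate returns from different calendar windows; the sketch above at least makes the overlap, and what was thrown away, explicit.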

Concern #2 – Comparisons That Aren’t Recommended

Private market index providers commonly recommend that the private market returns used in their indexes not be compared to public market returns, yet this is exactly what the chart in Table 1 does.

Public market returns are based on assets that are priced every second of the day on public exchanges. They represent the actual return an investor would have received per dollar invested in that asset.

Private market returns are often based on internal rates of return, which can be significantly influenced by the timing of cash flows and credit lines. Unlike public market returns, private returns may not represent the actual return that an investor has received.  In addition, quarterly returns may be based on subjective valuations from the managers of the private funds.

Again – not apples to apples.
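A small sketch, with made-up numbers, shows the difference: the time-weighted return that public indices report depends only on the sequence of period returns, while a money-weighted IRR also depends on when cash moves in and out.

```python
def time_weighted(returns):
    """Geometrically link period returns -- what public indices report."""
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Periodic IRR by bisection: the rate at which the net present
    value of the cash flows is zero (negative flow = contribution)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# A hypothetical fund earns +20% then -10%; time-weighted return is +8%.
twr = time_weighted([0.20, -0.10])

# Early investor: 100 in up front, worth 100 * 1.2 * 0.9 = 108 at the end.
irr_early = irr([-100, 0, 108])
# Late investor: 10 up front, 100 more after the good period,
# worth (10 * 1.2 + 100) * 0.9 = 100.8 at the end.
irr_late = irr([-10, -100, 100.8])

print(f"time-weighted: {twr:+.1%}, early IRR: {irr_early:+.1%}, "
      f"late IRR: {irr_late:+.1%}")
```

Same fund, same period returns, yet the late investor's IRR is negative while the early investor's is positive. Cash-flow timing, which private fund managers partly control through capital calls and credit lines, moves the reported number.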

The Solution:

If you find language like this related to the data used (see the bolded disclosures in Table 2 below) – discard.

“Due to the fundamental differences between [how private equity and public market returns are calculated], direct comparison . . . is not recommended.”

Concern #3 – Smoothing

Much research has been written about the problems of what is called private equity return smoothing, including a relatively humorous take on a serious subject from Cliff Asness discussing AQR’s new S.M.O.O.T.H. fund.

As Asness wrote promoting the virtues of this vehicle:

“while 2018 was a very painful year for…  virtually all traditional liquid asset classes and most geographies (e.g., long-only stocks and bonds), the S.M.O.O.T.H. Fund would’ve sailed through largely unscathed.”

To help drive this home, according to the disclosures that accompanied the chart in Table 1, below is a recreation of part of the 12-31-18 return series for Private Equity that was used to create the correlation and standard deviation matrix.

We have a hard time understanding how one classification of equities was down 13% to 20% for the latest quarter, yet another, which tends to be more highly levered and small-cap in nature, was down less than 1%.

Looking at the complete year, it is easy to understand why Cliff and AQR were so excited about their S.M.O.O.T.H. strategy: an index made up of funds that follow a similar approach was up approximately 10%, versus small-cap public indices being down well over 10%.

Considering that the chart in Table 1 discloses that they use the return streams that are in Table 2, it’s no wonder that the standard deviation of returns and correlations are lower as compared to public markets.

In terms of portfolio construction, using this data in an optimization model of most any type would allocate larger pieces of the pie to private equity at the expense of public markets.

The Solution:

If one type of return is not smoothed and another is subject to what may be significant smoothing, at least discount the data or, again, discard it.
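The mechanics are easy to demonstrate. Below is a sketch using purely simulated numbers (not anyone's actual data) in which each "reported" return is a blend of the current true return and the prior reported figure, a crude stand-in for appraisal-based NAVs. The reported series comes out markedly less volatile than the true one:

```python
import random
from math import sqrt

def stdev(xs):
    """Sample standard deviation."""
    m = sum(xs) / len(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def smooth(returns, weight=0.5):
    """Appraisal-style smoothing: each reported return blends the
    current true return with the previously reported one."""
    reported, prev = [], returns[0]
    for r in returns:
        prev = weight * r + (1 - weight) * prev
        reported.append(prev)
    return reported

random.seed(0)  # reproducible simulation
true = [random.gauss(0.02, 0.08) for _ in range(40)]  # 10 years of quarters
reported = smooth(true)

sd_true, sd_reported = stdev(true), stdev(reported)
print(f"true stdev: {sd_true:.1%}, reported stdev: {sd_reported:.1%}")
```

An optimizer fed the reported series would see the asset as far calmer than it really is, and allocate accordingly.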

Concern #4 – Survivorship Bias

Vendors that offer returns on private equity may include data that is subject to survivorship bias.

The contents of a data sample may be inadvertently stacked in favor of positive results if the PE managers included are exclusively “survivors”. Managers who fail to report returns may do so because of poor performance, and excluding managers that have stopped reporting could create a favorable bias in the series of returns being measured.

The private equity indices provided by vendors such as Cambridge Associates acknowledge this in their documents by saying the following:

“When fund managers stop reporting before their fund’s return history is complete, an element of “survivorship bias” may be introduced to a performance database, which could skew the reported returns upwards if the funds dropping out had poorer returns than those funds that remained.”

CA does go on to say that they feel that “compared public stocks and hedge funds… the illiquid nature of private investments can… limit this survivorship effect” and that the “performance of the small number of funds that have stopped reporting has been spread amongst all quartiles and has not been concentrated consistently in the poorer performing quartiles.”

This disclosure makes us feel more comfortable.

What continues to make us feel uncomfortable is that they also say that when a fund stops self-reporting data they “may, during this communication period, roll forward the fund’s last reported quarter’s net asset value for several quarters.”

This implies that some funds in the data may represent different time periods than others, which reminds us of the following language that we found in the disclosures at the bottom of the correlation chart:

“Private equity data are reported on a one- to two-quarter lag.”

Is it one or two quarters of lagged performance, and how many quarters are defined by the word “several”?


Even when using robust sets of data that cover long-term time periods, correlations can often be unstable due to natural and unpredictable changes in market returns.

When a data series leaves out returns or includes time periods that don’t match due to what may be “several” lags, beware.

The mixture of ingredients being put into your asset allocation pie might not be as stable as you would wish and – again – might warrant a discard.

Concern #5 – Not Enough Data

Correlations can vary with market cycles, and charts like Table 1, which look at only one 10-year period with non-apples-to-apples comparisons, are far from stable.

Research shows that, for a broad range of asset classes, correlations tend to vary over market cycles. Correlations across asset classes may significantly increase in down markets or significantly decrease in up markets. The financial crisis was a prime example: asset classes that had been illustrated as having moderate to low correlation with high-risk assets became highly correlated quickly, at the wrong time.

Will the correlations and lower standard deviation of private equity as compared to public markets shown over this one 10-year period in Table 1 hold up when they are needed?

We hope so, but data that covers only one 10-year time period doesn’t give us much confidence one way or another.

The Solution:

We recommend calculating correlations during market upturns and downturns, and during normal, quiet, and turbulent markets. The variations in the correlations that result should be considered when making an allocation decision for any asset class.
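That regime-split calculation can be sketched as follows (all returns below are hypothetical): split a paired sample by the sign of the market return and correlate within each regime separately.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def regime_correlations(market, asset):
    """Correlate asset vs. market separately in up and down markets."""
    up = [(m, a) for m, a in zip(market, asset) if m >= 0]
    down = [(m, a) for m, a in zip(market, asset) if m < 0]
    corr = lambda pairs: pearson([p[0] for p in pairs], [p[1] for p in pairs])
    return corr(up), corr(down)

# Hypothetical monthly returns in which the asset tracks the market far
# more tightly when the market falls -- the pattern seen in 2008.
market = [0.02, 0.03, -0.05, 0.01, -0.08, 0.04, -0.03, 0.02, -0.06, 0.05]
asset  = [0.01, -0.01, -0.04, 0.00, -0.07, 0.02, -0.03, -0.01, -0.05, 0.01]

corr_up, corr_down = regime_correlations(market, asset)
print(f"up-market correlation: {corr_up:.2f}, down-market: {corr_down:.2f}")
```

A single full-sample correlation would average these two regimes together and hide exactly the behavior an allocator most needs to know about.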

Considering that Table 1 doesn’t do this, and then uses return comparisons that are not recommended, discard or at the least discount it – significantly.


As we dove into this subject and data, we learned a lot.

It will make us step back the next time we see correlation and standard deviation charts that compare public markets to private investments, and we suggest you do the same.

Bringing this back to how to bake better portfolios, maybe this story from Preston McSwain makes for a good ending.

His family enjoys baking together, and one weekend, when trying to explain this to his kids, he put it this way.

“If you were trying to bake the perfect cake would you trust a recipe that did not have precise measurements and had been tested only 4 times per year?”

The quick answer was…

“No way.”

And, for those who are not bakers, consider this, which Preston got from a client one day.

“So, what you are saying is that large dollars are being allocated based on data from only 4 times at bat per season.  Why would any manager allocate any of their contract dollars to a player if they had stats on only 4 visits to the plate per season over a player’s career?”

Four times per year, over just one 10-year period, is the number of observations (times in the oven or at bat) represented in the chart in Table 1.

What is our answer to this client’s question?

No way – we wouldn’t allocate dollars based on this limited amount of data either.

In saying all of this, we are not suggesting that you throw out or discard charts like this altogether. Correlation and standard deviation metrics can be valuable tools. They just need to be created and applied sensibly.

If they aren’t, they might produce portfolios that look good before they’re put into heated market ovens, but might not look so good when they come out.

For more information on the Research Roundtable visit us here.
