A study has come out of Cass Business School that investigates a number of ways of building equity indices. The authors, Andrew Clare, Nicholas Motson and Stephen Thomas, of course include market capitalization weighting, along with a number of schemes that fall under the name of “smart beta”.
They compare the indices not only among themselves but also to a cohort of random portfolios. When I say “random portfolio”, I mean that there are constraints to be obeyed. But the only constraints that the random portfolios in the paper obey are that the weights are non-negative and sum to 100%.
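Just to make that setup concrete: here is one way to draw weight vectors that satisfy only those two constraints, by normalizing independent exponential draws, which samples uniformly from the simplex. This is a sketch of my own, not the procedure used in the paper.

```python
import numpy as np

def random_long_only_weights(n_assets, n_portfolios, seed=None):
    """Draw portfolios whose weights are non-negative and sum to 100%.

    Normalizing i.i.d. exponential draws spreads the portfolios uniformly
    over the simplex (equivalently, a Dirichlet(1, ..., 1) draw).
    """
    rng = np.random.default_rng(seed)
    raw = rng.exponential(size=(n_portfolios, n_assets))
    return raw / raw.sum(axis=1, keepdims=True)

weights = random_long_only_weights(n_assets=500, n_portfolios=10, seed=42)
print(weights.sum(axis=1))  # each row sums to 1
print(weights.min())        # no negative weights
```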
Best part
My favorite part is Figure 7 of Part 1 that shows the proportion of random portfolios — monkeys, as they say — that outperform the cap-weighted index over three-year rolling windows.
This puts the cart and the horse in the proper order, I think. It presents cap-weighting as just another strategy (one which does have some interesting properties) rather than as the market. Interestingly, cap-weighting almost always either outperforms essentially all of the random portfolios or is outperformed by essentially all of them (but see below). The latter happens much more often than the former.
You might hypothesize that flows into and out of index funds would have a big impact on the performance of cap-weighting. If they do, I don’t think it is evident from the picture. Cap-weighting did well in the early 1970s and then poorly for a decade.
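For concreteness, the statistic behind that figure, the fraction of random portfolios beating the cap-weighted index over each rolling three-year window, could be computed along these lines. This is a sketch of my own, assuming monthly return series are already in hand; it is certainly not the authors’ code.

```python
import numpy as np
import pandas as pd

def fraction_beating_benchmark(random_returns, benchmark_returns, window=36):
    """Fraction of random portfolios whose cumulative return over each
    rolling `window`-month period exceeds the benchmark's.

    random_returns    : DataFrame of monthly returns, one column per portfolio
    benchmark_returns : Series of monthly returns for the cap-weighted index
    """
    def rolling_growth(returns):
        # total growth of $1 over each rolling window
        return (1 + returns).rolling(window).apply(np.prod, raw=True)

    portfolio_growth = rolling_growth(random_returns)
    benchmark_growth = rolling_growth(benchmark_returns)
    beats = portfolio_growth.gt(benchmark_growth, axis=0)
    return beats[benchmark_growth.notna()].mean(axis=1)
```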
Maybe questionable
I’m concerned that ignoring constraints isn’t the best thing to do. (I would say that, wouldn’t I?) Fair enough that cap-weighting is compared to portfolios with no specific constraints.
But some of the other strategies — while not created with constraints — rather imply constraints. For example, minimum variance sort of implies that wildly high volatilities wouldn’t be desirable. Likewise, maximum dispersion would imply that large concentrations wouldn’t be tolerated well.
Another concern with the lack of constraints is that what “random portfolio” means becomes quite philosophical when the constraints are minimal. For instance, the way the “monkeys” were generated amounts to a little noise around equal weighting.
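That characterization is easy to check numerically. Using the simplex-uniform sampler from the sketch above (again my own illustration, which may differ in detail from how the paper’s monkeys were produced), the portfolios are enormously diversified and so never stray far from equal weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_portfolios = 500, 10_000

# weights uniform on the simplex: non-negative, summing to 1
raw = rng.exponential(size=(n_portfolios, n_assets))
w = raw / raw.sum(axis=1, keepdims=True)

# 1 / sum(w_i^2) is the "effective" number of equally weighted holdings
effective_names = 1.0 / (w ** 2).sum(axis=1)
print(effective_names.mean())  # roughly n_assets / 2, i.e. around 250
print(w.max(axis=1).mean())    # the largest single weight is only a percent or so
```

So even with no explicit diversification constraint, the typical monkey behaves like a mildly perturbed equal-weight portfolio.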
Adding practical constraints makes the problem much harder computationally, but easier conceptually.
I back-tested one of the recommendations they mentioned incidentally in the paper – simple monthly trend-following – and found a large reference-day effect that they didn’t seem to be aware of.
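To make the reference-day idea concrete, the sort of rule I mean can be written with the trading day of the month as an explicit parameter. The sketch below is a simplified ten-month moving-average rule (hold the index when the price is above the average, hold cash otherwise); it is my own approximation, not necessarily the exact specification in the paper.

```python
import pandas as pd

def trend_following_growth(prices, lookback_months=10, ref_day=1):
    """Simplified monthly trend-following rule.

    On the `ref_day`-th trading day of each month, hold the index for the
    following month if its price is above its `lookback_months`-month
    moving average on that day; otherwise hold cash (assumed to return 0).
    `prices` is a Series of daily closes with a DatetimeIndex.
    """
    # pick the ref_day-th trading day of each calendar month
    by_month = prices.groupby([prices.index.year, prices.index.month])
    obs_dates = by_month.apply(lambda s: s.index[min(ref_day - 1, len(s) - 1)])
    monthly = prices.loc[obs_dates.values]

    in_market = monthly > monthly.rolling(lookback_months).mean()
    next_month_return = monthly.pct_change().shift(-1)
    strategy_return = next_month_return.where(in_market, 0.0)
    return (1 + strategy_return.fillna(0)).cumprod()
```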
I wonder how many of the other strategies in the paper are subject to reference-day effects.
It’s not often that you get to read that “nearly every monkey beats the performance of the market-cap index”, though, so the paper has that going for it.
@Andrew how did you test for this? can you elaborate? thank you!
Hi Mike,
To test it, I downloaded S&P 500 end-of-day data back to 1950 or thereabouts (from Yahoo Finance, I believe) and wrote a Python script that implemented the trend-following algorithm.
The results I got were very strong for trend-following on the first day of the month, but much weaker for other days. I didn’t see any particular pattern among the other days of the month. I didn’t check to see whether different decades saw different results for the algorithm, so it might be the case that all the above-market results came from the ’50s and ’60s and the approach would be useless now. (Or maybe it’d be the other way around; I don’t know.)
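In terms of code, the comparison was essentially a loop over the possible reference days, running the rule once per day and comparing the resulting growth. Something like the following, where trend_following_growth is the sketch from my earlier comment and sp500_daily.csv is just a placeholder for wherever the downloaded data ends up:

```python
import pandas as pd

# daily S&P 500 closes; the file name is a stand-in for the downloaded data
spx = pd.read_csv("sp500_daily.csv", index_col=0, parse_dates=True)["Close"]

# run the rule once for each plausible trading day of the month
final_growth = {
    day: trend_following_growth(spx, ref_day=day).iloc[-1]
    for day in range(1, 22)
}
for day, growth in sorted(final_growth.items()):
    print(f"reference day {day:2d}: growth of $1 -> {growth:,.2f}")
```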
I contacted one of the study’s authors (Andrew Clare) with my results, and he told me that, “We have (in unpublished work) undertaken a lot of work on variants of the trend following rule, including the day on which you trade. We have not found any significant difference between the approaches ourselves.”
Trying the same algorithm on the Dow Jones index back to 1900 or thereabouts produced completely different, much more random results for me.
I’ve since lost interest in trading algorithms; for a small investor like myself, intensive research of companies with little or no existing analyst coverage seems like a bet that’s more likely to pay off. Drawing valid conclusions from back-testing requires an amount and quality of data that’s very, very expensive (e.g. COMPUSTAT); my half-baked impression when I was all done was that I was getting more-or-less random results that changed by the decade because I was using a small number of data points (only 12 per year) to explore a ridiculously complex system (the stock market).