The London Quant Group Spring Seminar took place this Monday and Tuesday, 2011 May 16-17. There were nine talks; here is a brief (and biased) summary of each.
Dan di Bartolomeo
Dan talked about the information ratios of active managers. He claims that reported information ratios are upwardly biased relative to what we think they mean. Part of this claim relates to the observation that all active fund managers think they are above average, but (by construction) only about half can be. The key point is that information ratios don't account for uncertainty in the alpha.
Dan showed some approximations that are feasible to compute, yielding a statistic like the information ratio but with alpha uncertainty taken into account. Extraordinary information ratios tend to shrink a lot more than moderate information ratios.
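Dan's actual approximations weren't spelled out in the talk summary, but the shrinkage effect is easy to illustrate. Here is a minimal sketch, assuming a simple Bayesian setup (my assumption, not Dan's method): a normal prior centered at zero for the true information ratio, combined with the usual approximation that an IR estimated from `n_years` of data has sampling variance of roughly `1 / n_years`.

```python
import math

def shrunk_ir(ir_observed, n_years, prior_sd=0.5):
    """Shrink an observed information ratio toward a prior mean of zero.

    Illustrative only: assumes the sampling variance of an IR estimated
    from n_years of annual data is roughly 1 / n_years (ignoring
    higher-moment corrections), and a normal prior with mean 0 and
    standard deviation prior_sd.  The posterior mean is the usual
    precision-weighted combination.
    """
    se2 = 1.0 / n_years                          # sampling variance of the IR
    weight = prior_sd**2 / (prior_sd**2 + se2)   # shrinkage factor in (0, 1)
    return weight * ir_observed

# With 5 years of data, an extraordinary IR loses much more of its
# value than a moderate one:
print(shrunk_ir(2.0, 5))   # shrinks by about 0.89
print(shrunk_ir(0.5, 5))   # shrinks by only about 0.22
```

Because the shrinkage is multiplicative, the absolute haircut grows with the size of the claimed IR, which matches the observation that extraordinary information ratios shrink the most.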
Antti Ilmanen
Antti’s talk was mostly a selection of topics from his book Expected Returns.
One theme was that assets that have high volatility relative to other assets in their category tend to have lower expected returns. He showed examples of this in several asset classes. This is a theme that we return to.
Steve Greiner
Steve talked about simulations for Value at Risk. He uses the Cholesky decomposition to impose the estimated correlation structure across the simulations. All his simulations use the same (estimated) variance matrix.
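A generic sketch of that idea (not Steve's specific method): draw independent standard normals, multiply by the Cholesky factor of the estimated variance matrix so the simulated returns carry its covariance structure, and read the VaR off the simulated portfolio distribution. The covariance matrix and weights below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical estimated covariance matrix for three assets
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
weights = np.array([0.5, 0.3, 0.2])

L = np.linalg.cholesky(cov)            # cov == L @ L.T

n_sims = 100_000
z = rng.standard_normal((n_sims, 3))   # independent standard normals
returns = z @ L.T                      # simulated returns with covariance cov

portfolio = returns @ weights
var_95 = -np.quantile(portfolio, 0.05) # 95% VaR, reported as a positive loss
print(f"95% VaR: {var_95:.4f}")
```

Every simulation here shares the one estimated variance matrix, which is the limitation noted above: the VaR is only as good as that single estimate.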
My own foray into the neighborhood of VaR simulations is “The Quality of Value at Risk via Univariate GARCH”.
Steve Ohana
This Steve is in search of warnings of impending busts. He highlighted a few possibilities.
The part of his talk that I liked the best was the title. He distinguishes black swans from rogue waves. The former are exogenous to finance, the latter are endogenous. We should worry more about rogue waves — especially since we have some sort of control over them.
Ely Klepfish
Ely took us through his scheme for doing what unfortunately has come to be known as robust portfolio optimization. That is, taking the variability of the inputs to the optimization into account. The ingredients to his method include cross entropy, shrinkage to a portfolio, and a set of optimizations on simulated data.
My favorite part of the talk was that Ely divided physics into three parts:
- naive
- classical
- quantum
He pointed out that finance can be divided that way as well. For example, naive (which should be considered as distinct from stupid) portfolio construction is to allocate the same amount to each category. Classical construction uses mean-variance optimization. Quantum construction takes uncertainty into account.
Uncertainty is the key bit to move into the quantum realm. With this frame of mind some of the other talks were quantum finance as well.
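The gap between naive and classical construction is easy to make concrete. A small sketch with made-up inputs (the numbers are mine, purely for illustration): naive construction ignores the inputs entirely, while classical mean-variance weights are proportional to the inverse variance matrix times expected returns.

```python
import numpy as np

# Hypothetical expected returns and covariance for three assets
mu = np.array([0.06, 0.08, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# "Naive" construction: the same amount to each category
w_naive = np.ones(3) / 3

# "Classical" construction: unconstrained mean-variance weights,
# proportional to inverse(cov) @ mu, scaled to sum to 1
w_mv = np.linalg.solve(cov, mu)
w_mv /= w_mv.sum()

print("naive:        ", w_naive)
print("mean-variance:", w_mv)
```

"Quantum" construction would go one step further and treat `mu` and `cov` as uncertain rather than known, which is exactly what the classical weights above fail to do.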
Matthew Rothman
Matthew covered a lot of ground.
He showed his estimates of the size of U.S. quantitative funds year by year. By his estimate they have continued to lose assets every year since the 2007 peak. Lose a lot of assets.
What ran through the whole talk was the quest for things to do so that quant funds don’t go extinct.
He did an experiment that gave rather pessimistic results for the value of the alpha that is yet to be discovered.
On the optimistic side he highlighted a few fun datasets to play with, and the implication that there are plenty more possibilities.
Harin de Silva
This talk had some overlap with another talk of his that inspired a blog post.
He talked about low volatility investing. He made a distinction between low volatility and minimum variance. Minimum variance has characteristics that other forms of low volatility might not share.
It was moved and seconded that the search for the anomaly should be at the high-volatility end rather than the low-volatility end.
Steve Wright
Steve the third is in favor of people actually communicating with each other. In particular he wants quants and fundamental fund managers to communicate.
His attempt to make that happen involves scenarios. The financial implications of each scenario are found, and then the portfolio is changed based on the estimated probabilities of the scenarios.
Most interesting to me (but perhaps zero other people) is that some reasonably novel optimization might be done using this scheme.
Randy O’Toole
Randy essentially asked two questions about correlations of assets:
- are they forecastable?
- can we force positive-definiteness on them?
The answer to both questions is “yes”. His experiments at the asset allocation level show them to be almost as predictable as volatilities.
It is quite easy to change a supposed correlation matrix into a real correlation matrix. But you are going to have to give up something. Randy talked about two main ways of doing this (with some variations), and why you might prefer one to the other.
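One common way of doing this (not necessarily one of Randy's two, which he did not name in my summary) is eigenvalue clipping: replace the negative eigenvalues with small positive ones and rescale the diagonal back to one. What you give up is fidelity to the original entries, since every off-diagonal element moves a little. A minimal sketch:

```python
import numpy as np

def clip_to_correlation(mat, eps=1e-8):
    """Force positive-definiteness by clipping negative eigenvalues,
    then rescaling so the diagonal is exactly 1 again.

    Illustrative sketch of eigenvalue clipping; the rescaling step
    perturbs all the off-diagonal entries, which is the price paid.
    """
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, eps, None)            # discard the negative part
    fixed = vecs @ np.diag(vals) @ vecs.T
    d = np.sqrt(np.diag(fixed))
    return fixed / np.outer(d, d)              # restore unit diagonal

# A "supposed" correlation matrix that is not positive definite
bad = np.array([[ 1.0,  0.9,  0.2],
                [ 0.9,  1.0, -0.7],
                [ 0.2, -0.7,  1.0]])
print(np.linalg.eigvalsh(bad))                 # smallest eigenvalue is negative
good = clip_to_correlation(bad)
print(np.linalg.eigvalsh(good))                # all eigenvalues now positive
```

The alternative family of methods (alternating projections in the style of Higham's nearest correlation matrix) instead finds the closest true correlation matrix in a chosen norm, at the cost of more computation; which trade-off you prefer depends on how much the individual entries matter to you.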
The good part
Some sizable fraction of the value of any conference is the informal discussions that happen during the interstices between the talks. This was no exception.
Thanks for the review, Patrick. A great idea. It's a wonder nobody has done it before.
Thank you for your summary of my paper. You are absolutely correct: I want people to communicate more.
I had not met Matthew Rothman before this seminar, but in some ways I see my paper as a follow-on to his. He put into perspective the credibility problem that quants and quant funds have been having since the credit crunch.
My analysis is that this is because we have allowed ourselves to become too insular and to make simplifying assumptions that often work well but sometimes fail badly. We use backward-looking risk models, we ignore fat tails, and we take shortcuts in our analysis, correlating at the asset level rather than exploiting any knowledge we may have of the underlying structural relationships.
Fundamental fund managers, in my view, have a complementary set of strengths and weaknesses, so we should incorporate desirable features from mainstream quantitative models for those aspects that allow it, while exploiting theory, experience and judgement for those aspects that do not.
To do this we need a reporting format that is accessible to both sides, hence my suggestion of extending linear factor models and mean-variance analysis to allow for scenarios, rank reporting of user views, and optimisation of asset weights to give minimum variance across a set of alternative outcomes.
Steve,
Thanks for the clarification.
Great summary, Patrick, thank you! I will be a regular visitor to your website from now on!