Limited attention and …

One part of my academic research agenda deals with the effects of limited attention on professional investing. This paper uses marital events as a shock to attention and shows how managers behave differently when they are getting married or divorced. We find that, in general, such managers trade less actively, suffer more from behavioral biases, and perform worse.

This past semester, I moved from the Univ. of Florida to the Univ. of Alabama and prepped a couple of new classes. While not as stressful as marriage or divorce, I did devote a bit less time to my portfolio this semester … so what happened in my case?

The biggest changes were that (1) I rebalanced less – I probably ran my screen only once or twice the entire semester to look for equities to deploy assets to, and (2) I did not, even once, look for improvements to my screens or run any of my secondary screens.

The effects were not immediately felt on performance, but I suspect that if I continued on this "autopilot" path, so to speak, the end result would be a less-than-manicured portfolio and, eventually, a stale and less-than-robust investment screen. All in all, I feel my personal experience is consistent with what we found in the paper above.

Change … in general

With my recent move to the University of Alabama from the University of Florida, I started thinking about the topic of change – specifically, in the context of quantitative investing, about changes in the basic rules that we take as given when investing quantitatively and what we can do about them.

Here’s an example – academics have long taught the CAPM, a model that predicts that companies with higher systematic risk (risk stemming from overall market conditions) should outperform companies with lower systematic risk. This makes intuitive sense… riskier companies, especially companies with higher risk exposure to overall economic conditions *should* give higher returns, on average. The empirical evidence in the 70s and 80s (in hindsight) was mixed, but in general we accepted this wisdom.
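The "systematic risk" in that prediction is the stock's beta: the slope from regressing the stock's excess returns on the market's excess returns. As a minimal sketch of how beta is estimated (the return series below are synthetic placeholders, not real data):

```python
import numpy as np

# Hypothetical illustration: estimate a stock's CAPM beta by regressing
# the stock's excess returns on the market's excess returns.
# All numbers below are synthetic, not real market data.
rng = np.random.default_rng(0)
market_excess = rng.normal(0.005, 0.04, 120)   # 10 years of monthly excess returns
true_beta = 1.3                                # the beta we build into the fake stock
stock_excess = true_beta * market_excess + rng.normal(0, 0.02, 120)

# beta = Cov(stock, market) / Var(market)
beta_hat = np.cov(stock_excess, market_excess)[0, 1] / np.var(market_excess, ddof=1)
```

With 120 months of data, the estimate lands close to the built-in beta of 1.3; with real, noisier data, estimated betas move around considerably more, which is part of why the empirical tests below are hard.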

However, starting in the 1990s, we started to question this basic assumption. Fama and French (1992), Table II, documented this:

[Table: average monthly returns by pre-ranking beta portfolio, Fama and French (1992), Table II]

Companies are sorted by "pre-ranking betas" (simply the beta estimated using data from before the period over which we measure returns), and the average monthly returns for the next year, by pre-ranking-beta portfolio, are presented. 1A and 1B are the 0-5% and 5-10% of companies by pre-ranking beta, 2-9 are the 2nd through 9th deciles, and 10A and 10B are the 90-95% and 95-100% of firms by pre-ranking beta.
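As an illustration of that sorting step (a hypothetical sketch, not Fama and French's actual procedure; the betas below are random placeholders), the percentile breakpoints can be mapped to groups like this:

```python
import numpy as np

def beta_group(p):
    """Map a percentile rank p in [0, 1) of pre-ranking beta to a group label."""
    if p < 0.05:
        return "1A"        # bottom 0-5% by pre-ranking beta
    if p < 0.10:
        return "1B"        # 5-10%
    if p >= 0.95:
        return "10B"       # top 95-100%
    if p >= 0.90:
        return "10A"       # 90-95%
    return str(int(p * 10) + 1)   # deciles 2 through 9

# Synthetic pre-ranking betas standing in for real estimates
rng = np.random.default_rng(0)
pre_betas = rng.normal(1.0, 0.4, 1000)
pct = pre_betas.argsort().argsort() / len(pre_betas)   # percentile ranks
groups = [beta_group(p) for p in pct]
```

Averaging next-year returns within each group then produces a table like the one above.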

Fama and French wrote, “the beta-sorted portfolios do not support the [CAPM]. … There is no obvious relation between beta and average returns.” [disclaimer: to my eyes, if I squint hard enough, I can sort of see a slight increase in average returns with beta, but the magnitude and monotonicity of the effect are both questionable.]

And so the decline of empirical belief in the CAPM started, and today there is little faith that stocks with high market betas will outperform the market (although the CAPM is still widely taught). (See, for example, "Is Beta Dead" from the appendix of a popular finance textbook.)

In fact, the academic literature has done something of a 180 on this topic. The new hot anomaly is "low vol," or "low beta." The literature around this anomaly shows that low-volatility/low-beta stocks actually outperform high-volatility/high-beta stocks and proposes several stories as to why this might be the case. If something so firmly grounded in theory can experience so complete a change, I think it's a cautionary tale for *all* quantitative strategies … all things (including both the CAPM beta and my time at Florida) run their course eventually.


Momentum Across Anomalies

In a new academic piece, we examine whether anomalies themselves exhibit momentum. Momentum in the context of investing refers to the idea that stocks that have done well recently continue to do well and stocks that have done poorly recently continue to do poorly. The momentum anomaly in stocks was widely publicized by Jegadeesh and Titman in their 1993 paper titled, “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency.”

We find this same idea holds for anomalies themselves. Examining 13 anomalies, we find anomalies that have performed well recently (in the last month), continue to do well next month. Anomalies that have been performing poorly recently continue to experience poor performance going forward. A chart makes this clear:

[Chart: growth of $10,000 in each of the three anomaly-momentum strategies]

The chart documents the evolution of $10,000 invested in one of three strategies. The top line is a strategy that invests each month in the top half of the 13 anomalies (7, since investing in 6½ anomalies is hard) being analyzed, based on the 13 anomalies’ performance in the previous month. So, for example, if the value, momentum, size, profitability, accruals, investment level, and O-score anomalies did better than the other 6 anomalies we analyzed last month, the strategy would equally invest in these 7 anomalies. The bottom line does the opposite, investing in the bottom 6 anomalies, and the middle line equally invests in all 13 anomalies across the entire period.
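A rough sketch of that selection rule, on made-up numbers (the 7/6 split follows the description above, but the return series are synthetic – this is not the paper's data or code):

```python
import numpy as np

# Sketch of the anomaly-momentum rotation rule: each month, equally
# weight the 7 of 13 anomalies with the best return last month.
# Anomaly return series here are random placeholders.
rng = np.random.default_rng(1)
n_months, n_anoms = 60, 13
rets = rng.normal(0.005, 0.03, (n_months, n_anoms))   # monthly anomaly returns

top_rets, bottom_rets, all_rets = [], [], []
for t in range(1, n_months):
    order = np.argsort(rets[t - 1])                  # rank by last month's return
    top_rets.append(rets[t][order[-7:]].mean())      # best 7 anomalies, equal weight
    bottom_rets.append(rets[t][order[:6]].mean())    # worst 6 anomalies
    all_rets.append(rets[t].mean())                  # all 13, equal weight
```

On random data like this, the three lines are indistinguishable; the point of the paper is that on real anomaly returns, the top-7 line separates from the rest.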

From the chart, it is clear that anomalies themselves exhibit momentum, and the result is robust to the usual battery of academic tests. From a practitioner’s perspective, the implication is clear: if you’re interested in smart-beta-type investing, pick a strategy (or strategies) that has been doing well recently. From an academic’s perspective, the more interesting question is why. If you’re interested in our take, our paper discusses the reasons we think we observe this momentum across anomalies.


The sky is falling … [but I’m not short]

With the Dow and other indices at record levels, there is no shortage of pundits out there warning of an impending correction, or worse. See, for example, http://money.cnn.com/2016/04/15/investing/stock-market-donald-trump-ted-cruz/ and http://thesovereigninvestor.com/exclusives/80-stock-market-crash-to-strike-in-2016/ .

Some of the fear mongers, unfortunately, have an ulterior motive for predicting doom and gloom. A number of advisers look at such predictions as free options – if there is a crash, they can sagely point to their warnings and say, “See, I called it.” Some can even monetize their call… raising funds as investors look to re-allocate their decimated portfolios to stem the bleeding. If there is no crash, well, no one will look back and call them out on their incorrect call.

To me, the real question is whether these prognosticators are putting their money where their mouths are. If they think a major crash is coming, are they short? Or at least in cash? If not, I find their warnings have little credibility. They might be right, they might be wrong – either way, they’re not betting their own money on the call.

As Paul Samuelson famously quipped, the stock market has “predicted nine of the last five recessions.” It wouldn’t surprise me if unscrupulous financial advisers and nay-saying pundits predicted ninety.

Blending factors…. the problem with intersections

I have recently been working on an academic paper on using multiple factors to invest. This is a marked departure from most of my other academic work (which generally involves hedge fund data). This research is also directly relevant to my investing work. While the research itself is still ongoing and I am not ready to share the conclusions, I had a couple of insights on how difficult it is to combine factors that I’d like to share.

The technique I’ve been using to combine factors is looking for intersections – I believe value stocks outperform growth stocks and past winners outperform losers, so I want to buy stocks that are both value stocks AND past winners. (Incidentally, the managers of one of the most successful quantitative investing shops wrote a rigorous academic paper arguing for this exact factor combination.)

This works well… to an extent. The more factors you add, the fewer stocks get through. As an example, if you wanted the top 10% of stocks by value (say, by P/E ratio) and the top 10% of stocks by past returns, and the two criteria were uncorrelated, your filter would return about 1% of the stocks in the universe. Adding a 3rd uncorrelated factor, say size (small-cap firms generally outperform larger ones), would reduce the filtered stocks even further, to about 0.1% of the universe.
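This shrinkage is easy to verify by simulation. A quick sketch with random, uncorrelated factor scores (hypothetical data, not a real stock universe):

```python
import numpy as np

# With independent factors, the fraction of stocks passing k top-decile
# filters is roughly 0.1 ** k. Simulate with uncorrelated random scores.
rng = np.random.default_rng(2)
n = 1_000_000  # a large fake universe so the fractions are stable
value, momentum, size = rng.random(n), rng.random(n), rng.random(n)

two_factor = np.mean((value > 0.9) & (momentum > 0.9))                 # ~1%
three_factor = np.mean((value > 0.9) & (momentum > 0.9) & (size > 0.9))  # ~0.1%
```

In a real universe of a few thousand stocks, 0.1% is a handful of names at best – which is exactly the sample-size problem described next.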

Beyond 3 factors, it is impossible to use intersections to combine factors: the resulting sample size is simply too small. One could (and I have) relax the cutoff on each individual factor, and in that manner blend more factors, but this feels artificial and might even be to the detriment of the screen.

To use a sports analogy (and since the NBA championships are on), I could ask for the top 10% of 3-point shooters and the top 10% of overall point scorers, and I’d probably get Stephen Curry and a few others. If I then add the top 10% of assists to my criteria, I probably won’t have a single player in the league fitting the bill. If I then relax my criteria to the top 30% of 3-point shooters, overall point scorers, and assists, I’d probably get players in there again, but it’s unclear whether I’d prefer them to my original 2-factor criteria that returned Curry and co.

So intersections are tough to work with.

The January Effect

The January Effect refers to the observation that stock returns appear much higher in January than other months. There’s a beautiful wiki on the effect ( https://en.wikipedia.org/wiki/January_effect ) so I won’t go into much detail on it, except to say that:

1. The first thing that came to my mind regarding this effect is taxes – people sell in December to harvest tax losses/gains and then rebuy in January

2. From my understanding, this effect may no longer exist – papers come out both ways on this with recent data.

3. Here’s what Equities Lab has to say about the issue. I ran three backtests – all used data from Jan 1, 2000 to today and rebalanced monthly over all stocks (including illiquid small stocks). The first covers all months, the second all months except January, and the third January only.

[Backtest results: all months]

[Backtest results: all months except January]

[Backtest results: January only]

To compare the three, we can normalize everything to 12-month returns:

All months: 1.0055^12 → 6.8% annualized

Non-January: (1.0048^(12/11))^12 → 6.5% annualized

January only: (1.0007^(12/1))^12 → 10.6% annualized
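The annualization arithmetic can be checked directly (a quick sketch reproducing the calculations above; the 0.55%, 0.48%, and 0.07% inputs are the backtests' monthly returns):

```python
# Annualize the three backtests' monthly returns, as described above.
all_months = 1.0055 ** 12 - 1                 # 12 active months per year
non_jan = (1.0048 ** (12 / 11)) ** 12 - 1     # 11 active months, scaled to 12
jan_only = (1.0007 ** (12 / 1)) ** 12 - 1     # 1 active month, scaled to 12

print(f"{all_months:.1%}, {non_jan:.1%}, {jan_only:.1%}")  # 6.8%, 6.5%, 10.6%
```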

In light of this evidence, I think there’s a case to be made that some version of the January Effect still exists.