
Just finished my latest momentum test run over a calibration set of 3,000. At the moment, this calibration factor affects only the order weighting, not the sensitivity of the purchase or sale decision. I might add a further factor to strengthen its effect and feed that into the weighting as well. That said, this single setting change took a full day and a half to run, so I'd better measure twice before cutting once more.
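To make the separation concrete, here is a minimal sketch of what I mean by the calibration factor touching the weighting but not the decision. The function name and scaling rule are purely illustrative assumptions, not the actual algorithm:

```python
import numpy as np

def order_weight(momentum_score, calibration=3000.0, base_weight=1.0):
    """Hypothetical sketch: the calibration factor scales order size only;
    the purchase/sale decision (the sign of the score) is untouched."""
    signal = np.sign(momentum_score)                 # buy (+1) or sell (-1): unaffected
    weight = base_weight * abs(momentum_score) / calibration  # size scaled by calibration
    return signal, weight

sig, w = order_weight(momentum_score=150.0)  # buy, with a weight of 150/3000
```

Changing `calibration` here rescales every order's size uniformly while leaving the sequence of buy/sell decisions identical, which is the behaviour described above.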

Some of these results were already presented in past updates, but the set below covers the full period in question, including one last result from 2006 that is a bit of an outlier (a positive one, though) relative to the rest, yet still broadly consistent with the aggregate model.

It is, of course, too early to make any definite statement, but technically speaking, if stock market prices follow a random walk, it should not be possible to replicate their performance using purely historical data (a formalised null-hypothesis significance test will be included with the final release). The data, however (each data point covers four years of trading history and roughly 100 to 150 trades per window), does show a fairly strong correlation, at least up until 1990. That does not accord with a random walk of stock market prices.
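As a rough illustration of the kind of significance test I have in mind (not the one that will ship with the final release), a permutation test works well here: under the random-walk null, return increments are exchangeable, so shuffling them should destroy any momentum signal. Everything below is a generic sketch over synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

def momentum_pvalue(returns, n_perm=2000, rng=rng):
    """Permutation test on lag-1 autocorrelation: under the random-walk
    null, shuffled returns are as likely to show the observed correlation."""
    observed = np.corrcoef(returns[:-1], returns[1:])[0, 1]
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(returns)
        perm_corr = np.corrcoef(shuffled[:-1], shuffled[1:])[0, 1]
        if abs(perm_corr) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one to avoid a zero p-value

# Pure random-walk increments: momentum should NOT look significant here.
p = momentum_pvalue(rng.standard_normal(500))
```

A small p-value on real trade windows would be evidence against the random-walk null; on the synthetic noise above it should generally come out unremarkable.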

From 1990 onwards, the algorithm starts working as directed and correctly hedges out some of the downside risk. That was an original intention, though it still gets swayed a little too much. The good news: it never loses as much, or as often, as the benchmark across all periods. This isn't too surprising: the basic laws of risk and return are at play here from a stochastic point of view. Excluding that last result from 2006, the standard deviation comes out at half that of the benchmark, which is in line with the expected reduction in variance.
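For the curious, the volatility comparison above is just a ratio of sample standard deviations. The sketch below uses made-up return series (the hedged series is constructed with half the benchmark's exposure plus a little noise, purely for illustration):

```python
import numpy as np

def stdev_ratio(strategy_returns, benchmark_returns):
    """Strategy volatility relative to the benchmark; a value near 0.5
    corresponds to the half-benchmark standard deviation reported above."""
    return np.std(strategy_returns, ddof=1) / np.std(benchmark_returns, ddof=1)

rng = np.random.default_rng(1)
bench = rng.normal(0.0, 0.02, 1000)                 # hypothetical benchmark returns
strat = 0.5 * bench + rng.normal(0.0, 0.001, 1000)  # hedged: roughly half the exposure
ratio = stdev_ratio(strat, bench)                   # expect roughly 0.5
```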

But even taking this into account, the correlation factor is still concerning. There does appear to be a significant impact from momentum on market prices. And though this might go against the random-walk thesis, it need not go against Efficient Market Theory: an efficient market requires some inefficiencies in order to motivate rational profit-takers. In other words, momentum could actually be yet another driver of efficient price discovery (or maybe not; I can't say just yet). Please note that this is still at a very early stage and I want to be careful in qualifying these statements before the final release.

Without further ado, please find below the result set for the (3k) calibration run. The results are presented with a P&L graph and a distribution graph for each reporting period: first the full period (1954 to 2010), then 1954 to 1990, and finally 1991 to today. Each graph can be clicked for a larger, more detailed version.

Oh, and I'm thinking of renaming the algo to something like "4!-Moments". I think it encapsulates the idea pretty well.

Full Period

[Image: 3kPnLFullHistory (P&L graph, full period)]
[Image: 3k-distributionFullPeriod (distribution graph, full period)]

1954 to 1990

[Image: 3kPnL1954to1990 (P&L graph, 1954 to 1990)]
[Image: 3k-distribution54to90 (distribution graph, 1954 to 1990)]

1991 to 2010



Disclaimer: Material posted on 24-something does not contain (and should not be construed as containing) personal financial or investment advice or other recommendations. The information provided does not take into account your particular investment objectives, financial situation or investment needs. You should assess whether the information provided is appropriate to your particular investment objectives, financial situation and investment needs. You should do this before making an investment decision based on the material above. You can either make this assessment yourself or seek the assistance of an independent financial advisor. 24-Something, associated parties and Tariq Scherer accept no responsibility for any use that may be made of these comments and for any consequences that result.