Machine learning and deep data analytics are becoming must-have components for asset managers and even family offices, with firms like BlackRock, the world's biggest asset manager by assets under management, investing heavily in the technology. But it is not all about the big players; start-ups are also pushing the technology frontier in asset management.
Fintech start-up AlgoDynamix is working with a number of asset managers, hedge funds and family offices on artificial intelligence applications for portfolio construction and risk management. The Cambridge, UK-based group specialises in identifying endogenous portfolio risk, using deep data algorithms to scan the real-time behaviour of buyers and sellers for anomalous clusters of activity.
The company says it can spot looming tail risk about 10 or 12 hours before a portfolio implodes. "We look for the lethal stuff that kills your portfolio," says Jeremy Sosabowski, CEO and co-founder of AlgoDynamix. "This is internally generated risk. It's the panic. It's not the external stuff. It's not the Swiss central bank de-pegging their currency."
To illustrate unpredictable endogenous risk, Sosabowski uses the example of the Millennium Bridge, built over the Thames in London and opened in 2000 with a defective design. Its designers and engineers had assumed people would walk randomly across it. In fact, pedestrians crossing a bridge with lateral sway have an unconscious tendency to match their footsteps to the sway, exacerbating it. The bridge was eventually closed and redesigned.
Sosabowski says similar assumptions are made about markets behaving in a random fashion: the misapprehension that financial data behaves something like Brownian motion, that is, randomly. "I'm not a big fan of Brownian Motion or statistical distributions. Nobody in Cambridge on our team believes that distribution or Brownian Motion or statistics is the right tool set for financial data."
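The objection to the Brownian-motion picture can be made concrete with a quick comparison. The sketch below (illustrative only; it is not AlgoDynamix's methodology) simulates Gaussian increments alongside heavy-tailed Student-t returns, a common stand-in for real market data, and measures how much fatter the tails are:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Gaussian increments (the Brownian-motion picture) vs. heavy-tailed
# Student-t returns, a common stand-in for real market data.
gaussian = rng.standard_normal(n)
heavy = rng.standard_t(df=5, size=n)

def excess_kurtosis(x):
    """Fourth standardised moment minus 3 (zero for a normal distribution)."""
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

print("Gaussian excess kurtosis: ", round(excess_kurtosis(gaussian), 2))
print("Student-t excess kurtosis:", round(excess_kurtosis(heavy), 2))

# Frequency of moves beyond four standard deviations under each model.
z_heavy = (heavy - heavy.mean()) / heavy.std()
print("P(|move| > 4 sigma), Gaussian:   ", float(np.mean(np.abs(gaussian) > 4)))
print("P(|move| > 4 sigma), heavy-tailed:", float(np.mean(np.abs(z_heavy) > 4)))
```

Under the Gaussian model a four-sigma move is a once-in-decades event; under a heavy-tailed model it happens orders of magnitude more often, which is the gap Sosabowski is pointing at.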
Identifying clusters of uncharacteristic buying and selling can be useful to asset managers and regulators, he says. Right before an M&A announcement, a very large cluster of anomalous activity might appear in the data. Sometimes this can also herald a negative impact on share prices. A good example was the rush by bosses at the UK security group G4S to sell shares just before the announcement that the company had botched the security for the 2012 Olympics in London. This historical pattern of insider selling by G4S executives was identified by machine learning algorithms.
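A minimal version of this idea can be sketched with a rolling z-score: flag the days on which traded volume is an extreme outlier versus its own recent history. This is a crude stand-in for AlgoDynamix's proprietary cluster detection, with all thresholds and the synthetic data chosen purely for illustration:

```python
import numpy as np

def flag_anomalous_clusters(volumes, window=30, z_threshold=3.0):
    """Flag indices where volume is an extreme outlier versus its
    trailing window -- a crude stand-in for cluster detection."""
    volumes = np.asarray(volumes, dtype=float)
    flags = []
    for t in range(window, len(volumes)):
        hist = volumes[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and (volumes[t] - mu) / sigma > z_threshold:
            flags.append(t)
    return flags

# Synthetic daily sell volumes: a calm regime, then a sudden burst of
# uncharacteristic selling of the G4S kind.
rng = np.random.default_rng(0)
calm = rng.normal(1_000, 50, size=60)
burst = np.array([1_000.0, 2_500.0, 2_800.0])
series = np.concatenate([calm, burst])
flags = flag_anomalous_clusters(series)
print(flags)  # the burst days (indices 61 and 62) are flagged
```

A production system would work on tick-level order flow rather than daily totals, but the shape of the problem is the same: define "characteristic" from recent behaviour, then look for departures from it.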
Sosabowski says these sorts of events are not one-off black swans. These are the grey swans, typically occurring about 20 or 25 times a year. "When these sorts of things happen, correlation tends to spike, so you think you have a diversified portfolio, you think you are covered; but when you most need it is when things actually stop working. That's the painful reality."
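The correlation spike he describes is easy to reproduce with toy data: two assets that move independently in calm markets become almost perfectly correlated once a shared panic factor dominates both. The figures below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250

# Calm regime: two assets driven by independent idiosyncratic noise.
calm_a = rng.normal(0, 0.01, n)
calm_b = rng.normal(0, 0.01, n)

# Stressed regime: the same idiosyncratic noise, plus a shared panic
# factor large enough to swamp it in both assets.
panic = rng.normal(0, 0.05, n)
stress_a = rng.normal(0, 0.01, n) + panic
stress_b = rng.normal(0, 0.01, n) + panic

print("calm correlation:  ", round(np.corrcoef(calm_a, calm_b)[0, 1], 2))
print("stress correlation:", round(np.corrcoef(stress_a, stress_b)[0, 1], 2))
```

The calm-regime correlation hovers near zero while the stressed-regime correlation approaches one, which is exactly why diversification measured in quiet markets can evaporate when it is most needed.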
In contrast, Sylvain Champonnois, director of scientifically driven active equity at BlackRock, is looking for new sources of alpha amid very large and novel datasets. Data consumption and production have evolved from the academic journals of the past to behavioural indicators such as Facebook likes, which can be used to make accurate predictions about a person.
A year ago BlackRock took out a subscription to a random 1% sample of Twitter users. Champonnois says: "Twitter is amazingly rich so any kind of hot topic will be trending. It makes it very interesting to look at things like Brexit, the Greek crisis and so on." He adds that BlackRock also analyses meta-text, hashtags and even icons and emoticons. Another example BlackRock likes is bulletin board activity in China.
"So you will have retail investors who will write a lot about Asian markets and especially China. If you trade in those markets there is a rich source of data. Bulletin boards seem to drive a lot of hype and excess volume tends to be associated with reversal, so we can construct some sort of reversal insight in those markets."
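One plausible reading of that "reversal insight" can be sketched as follows. The function and thresholds are hypothetical, not BlackRock's actual signal: when volume is abnormally high relative to its trailing window (a proxy for hype), the sketch bets on the latest move partially reversing.

```python
import numpy as np

def reversal_signal(returns, volumes, window=20, vol_z=2.0):
    """Hypothetical reversal insight: when traded volume is abnormally
    high versus its trailing window (hype), fade the latest move."""
    returns = np.asarray(returns, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    signal = np.zeros_like(returns)
    for t in range(window, len(returns)):
        hist = volumes[t - window:t]
        z = (volumes[t] - hist.mean()) / (hist.std() + 1e-9)
        if z > vol_z:
            signal[t] = -np.sign(returns[t])  # bet on partial reversal
    return signal

# Synthetic example: ordinary days, then one hyped up-day at t = 25.
rng = np.random.default_rng(7)
rets = rng.normal(0, 0.01, 30)
vols = rng.normal(100, 5, 30)
rets[25], vols[25] = 0.05, 200  # big up-move on huge volume
sig = reversal_signal(rets, vols)
print(sig[25])  # fades the hyped move
```

In practice the hype proxy would come from bulletin board post counts or sentiment rather than volume alone, but the structure — condition on abnormal attention, then trade against the move — is the same.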
But all this can overlook the fact that noisy data can be misleading. A good example, says Champonnois, was when Google declared that it was better at predicting flu epidemics than local health authorities in the US, based on searches for flu-related terms rather than on health authority data. However, in 2013, media coverage of the flu got people searching the topic, leading Google to erroneously predict an outbreak. "Our response is you need to have a scientific process – you can't use this stuff as a pure black box," says Champonnois.
He illustrates the point. “I manage a model that’s related to developed market firms. Every morning I get a sort of spreadsheet: each line is a company in that universe, and each column is an insight, which is a sort of component of returns. We bunch those insights into certain categories so some will be related to fundamentals, which is very traditional quant (analysis). But we will have another set of columns that’s related to sentiment. Then you also have some insight into macro.
“So for each of the stocks in the universe I know my view in terms of fundamentals, my view in terms of sentiment, my view in terms of macro-themes. At some point you combine those different insights into what is your best return forecast for each of those stocks.”
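The spreadsheet Champonnois describes — one row per company, one column per insight category — can be sketched as a simple weighted combination. The company names, insight values, and category weights below are invented for illustration; BlackRock's actual combination step is not public:

```python
# Hypothetical insight table: one row per company, one column per
# insight category. All names, values and weights are illustrative.
insights = {
    "ACME":   {"fundamentals":  0.8, "sentiment": -0.2, "macro": 0.1},
    "GLOBEX": {"fundamentals": -0.3, "sentiment":  0.5, "macro": 0.4},
}
weights = {"fundamentals": 0.5, "sentiment": 0.3, "macro": 0.2}

def combine(views, weights):
    """Fold the per-category views for one stock into a single return
    forecast via a weighted sum across the insight columns."""
    return sum(weights[category] * view for category, view in views.items())

forecasts = {name: combine(views, weights) for name, views in insights.items()}
print(forecasts)  # e.g. ACME: 0.5*0.8 + 0.3*(-0.2) + 0.2*0.1 = 0.36
```

A weighted sum is the simplest possible combination rule; the interesting modelling questions — how the weights are chosen, and whether they vary by regime — are exactly where approaches like BlackRock's differentiate themselves.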