We attended a hedge fund luncheon the other day giving a historical overview of artificial intelligence (AI) and machine learning and how that particular hedge fund is implementing it in their processes. They showed the steady progression of artificial intelligence and its most recent application, machine learning, whereby a machine (read: computer) can learn on its own without being explicitly programmed, culminating in the ability of Google’s AlphaZero to teach itself the game of chess and beat another chess engine (Stockfish). Their clear view is that AI is the future in the investment world, and they dutifully go about having their machines try to figure out ways to analyze past price movements and patterns to add some alpha.
But there’s a growing chorus of people who aren’t so sure that we’re approaching this problem correctly – among them Ben Hunt over at Epsilon Theory with his excellent post: We’re Doing it Wrong. You should just go over and read the entire post he has on this, but here are some highlights:
I think investment professionals, quant and non-quant alike, are misusing the massive computing power that each and every one of us has at our fingertips. Whether it’s the powerful computer that we call a smartphone, whether it’s the crazy powerful multi-threaded computer that we call a laptop, whether it’s the insanely powerful computing utility we call AWS or Azure or the like … we’re using machine computing processes as an extension of our human computing processes.
[We’re thinking]… I see why we want this artificial intelligence system, it’s the next level. It’s the Giant Brain, replacing the Big Brain of all those computers that DE Shaw and Two Sigma and RenTech are using to figure out markets and mint money, which replaced the Little Brain of us humans scurrying around in the pits. AI is going to pierce through all the noise and find us the signal. It’s going to identify the pattern. It’s going to tell us the answer.
We think of markets as a clockwork machine, as an intricate collection of gears upon gears. We believe that if only we examine the clockwork closely enough, we can identify some hidden gear or unbeknownst gear movement that will let us predict the clockwork’s movement and make a lot of money.
[But]…the market is not a clockwork machine. The market is a bonfire.
What an unbelievably good description of the “market”, as if there were only one. A bonfire is man-made, seemingly under control (within some wide bounds), but undeniably wild and unpredictable when you’re down there close to the flames. We can’t wait to bring up this line in our next due diligence meeting: “You say you use artificial intelligence to identify profitable trading patterns. Could you use the same AI to tell us when and where the next ‘pop’ or ‘crackle’ in a bonfire will be?” But this also feels a little like something that can get you excommunicated from the world of quants and algorithmic trading strategies. The world of trading models is based on coming up with an explanatory model of the clockwork machine and testing that model on past versions of the clock (previous data). Surely, we can’t say there is no value to that? That there’s no place for a backtest?
The takeaway for us is to analyze managers using machine learning in a new light. Are you doing brute-force-type machine learning, like the early chess-beating computers, where you program into the machine how to play chess and it crunches through millions of computations to calculate the most probable outcome given a certain move (trade)? There’s value to that – but really, it’s just operational efficiency (replacing an army of hundreds of analysts with a computer). Or are you setting the computer loose to do its own learning and find new sources of alpha? That sounds scary, and like something a risk manager would almost never be comfortable with, not knowing what exactly that end game looks like. What if the machine thinks it should do credit default swaps when you’re an exchange traded futures manager? What if it learns that the only way to win is not to play (a la WarGames in 1983)?
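For readers curious what that "brute force" approach actually looks like, here is a minimal sketch of minimax search, the technique behind the early chess computers described above: the machine is explicitly told the rules and exhaustively evaluates every possible continuation, picking the move with the best guaranteed outcome. The toy game tree below is a hypothetical illustration, not a real chess position.

```python
def minimax(node, maximizing=True):
    """Exhaustively evaluate a game tree.

    A node is either a leaf (a numeric payoff for the maximizing player)
    or a list of child nodes. At each level we assume the player to move
    picks the child best for them, and propagate that value up the tree.
    """
    if not isinstance(node, list):
        # Leaf: a terminal payoff, explicitly programmed in by a human.
        return node
    # Recurse, alternating between the maximizing and minimizing player.
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)


# Toy tree: the maximizer picks a branch, then the minimizer picks a leaf.
# Branch [3, 5]: minimizer forces 3. Branch [2, 9]: forces 2.
# Branch [0, 7]: forces 0. So the maximizer's best guaranteed payoff is 3.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # prints 3
```

This is pure computation of human-specified rules and payoffs – nothing is "learned" – which is why it amounts to operational efficiency rather than a new source of alpha. The self-learning approach (AlphaZero-style) instead discovers its own evaluation of positions, which is exactly the part a risk manager can't inspect in advance.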
The answers, of course, are in the filters and in defining what and how the computer runs. But that’s the catch-22 Mr. Hunt is getting at. When we add filters and human elements, we’re lessening the impact of the AI-driven approach. In short, to make sure we don’t create a planet-killing Skynet, we may never have an AI-created Mozart. In investing terms – to make sure we don’t create a portfolio-killing AI strategy, we may never get an AI-created Warren Buffett.