19 March 2015

What to know about high-speed trading before the next market disaster strikes

Ask people on the street what mental image they associate with the words “stock exchange,” and you’ll likely hear about a large, imposing building in the middle of New York or Chicago. Inside the building is a huge space crowded with traders in multicolored jackets screaming and gesticulating at one another.

Until ten years ago, that would have been a pretty accurate description of a stock exchange. Today, however, almost all trading is done by algorithms firing digital commands traveling near the speed of light to rows upon rows of computer servers sitting in nondescript suburban warehouses.

The transition from human to electronic trading came with the promise of using faster and cheaper technology to drastically lower the costs of trading shares and to make it much easier to determine the most up-to-date prices for all market participants (commonly known as price discovery).

Certainly, for investors who want to buy or sell one hundred shares or a couple of futures contracts, the promise of automation seems to have been realized. They can now trade at lower transaction costs, connect to more buyers or sellers and take advantage of prices that can be discovered around the clock.

But with all that speed, automation and complexity comes the risk that a string of problematic ones and zeros could cause a market meltdown, even if only a temporary one. As both computing power and communication speed continue to grow, the intensity of these disruptive events will only increase as well, making it essential to diagnose the root causes and craft safeguards that prevent or mitigate them.

By far the biggest such incident occurred on May 6, 2010, when markets for stocks and derivatives collapsed and rebounded with extraordinary velocity. The Dow Jones Industrial Average fell about 1,000 points, losing 9% of its value in a matter of minutes, its biggest intraday point drop up to that time, then recovered its losses just as quickly.

Because these dramatic events unfolded so fast and with so much fury, what happened that day has become known as the “Flash Crash.” The crash was akin to an accident at a nuclear power plant: a massive release of energy over a short period of time, followed by blackouts across the whole power grid.

In the aftermath of the Flash Crash, the public became fascinated with the blend of high-powered technology and hyperactive trading known as high frequency trading (HFT). To many investors and market commentators, high frequency trading is the root cause of the unfairness and fragility of automated markets.

Within hours of the Flash Crash, my colleagues and I began an empirical analysis of trading in the days before May 6, 2010, and on the day itself.

We found that the Flash Crash was triggered by a massive automated sell program in the stock index futures market.

We also established that high frequency traders – algorithms that trade very quickly but do not accumulate large positions – did not cause the Flash Crash. They did, however, contribute to the extraordinary market volatility experienced that day. We also showed how HFT can contribute to flash-crash-type events by exploiting short-lived imbalances in market conditions.

So, technology enables trading strategies that can lead to flash-crash type events. But perhaps with time, markets themselves will self-correct and become more resilient.

One well-established way to achieve market resiliency is through greater competition. As more and more participants adopt HFT, they should eventually start competing to provide services rather than looking for ways to take advantage of slower traders.

My colleagues and I wanted to find out if this is actually happening. We carefully looked into the inner workings of the HFT industry over two years. We found that it was dominated by an oligopoly of fast and aggressive traders who somehow manage to earn persistently high returns while taking little risk.

How did this environment persist? For some reason, competitive market forces were unable to break up the oligopoly, and the benefits of automated markets were not being fully realized by all market participants. Instead of competing to provide the best execution to customers, incumbent high frequency traders seemed to be engaged in a winner-takes-all arms race for small reductions in latency, the amount of time it takes for a trading platform to respond to a command.

We decided to study latency in much more detail. Latency, the gap between the issuance of a command and its execution, is present in all sufficiently complex mechanical or automated systems. What we wanted to look at was automated trading platforms – where a one-millisecond delay can translate into millions of dollars.

In a recently completed study, we measured the latency of a sophisticated automated trading platform and found that the amount of time it takes for a given trade request to be processed can vary wildly from one command to the next.

Sometimes, an exchange takes a few milliseconds to respond to a command to post or cancel an order. At other times it may take several seconds. Perhaps that’s where the advantage of high frequency traders comes from. If they can predict latency, then they can effectively predict what other market participants will do.
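
To make the idea concrete, here is a minimal sketch, in Python, of how latency and jitter could be measured from a log of order commands. The field names and every number in it are hypothetical, invented purely for illustration; real exchange logs differ by venue.

```python
from statistics import mean, stdev

# Hypothetical log of order commands: each record holds the time a
# command (post or cancel) was sent and the time the platform
# acknowledged it. Timestamps in milliseconds; values are made up.
commands = [
    {"id": 1, "sent_ms": 0.0,  "acked_ms": 2.1},
    {"id": 2, "sent_ms": 10.0, "acked_ms": 13.4},
    {"id": 3, "sent_ms": 20.0, "acked_ms": 1520.0},  # outlier: ~1.5 seconds
    {"id": 4, "sent_ms": 30.0, "acked_ms": 33.0},
]

# Latency: the gap between issuing a command and the platform's response.
latencies = [c["acked_ms"] - c["sent_ms"] for c in commands]

# Jitter: the variability of latency from one command to the next.
print(f"mean latency: {mean(latencies):.1f} ms")
print(f"jitter (std dev): {stdev(latencies):.1f} ms")
print(f"worst case: {max(latencies):.1f} ms")
```

Even in this toy log, a single multi-second acknowledgment dominates both the average and the jitter, mirroring the wild variation described above.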

To visualize this, imagine one of those slow-motion action sequences in which an action hero quickly disables a large crowd of adversaries. By moving faster than the adversaries and anticipating their moves, the hero wins every battle.

How can market participants react to the presence of such action heroes?

Well, as in the movies, they all advance or retreat together as soon as they can move. This can mean that some trading algorithms overreact or underreact to changes in market conditions. Effects of this sort, if any, should show up in prices, especially in volatility, a measure of how jumpy prices are.

So, we examined the relationship between trading platform latency and the volatility of asset prices. We found that latency, and especially the uncertainty about latency, known as jitter, can predict the volatility of asset prices. That is, the greater and more uncertain the delay, the more volatile the asset, which, of course, is great for high frequency traders, who make more money when prices move around more.
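
For readers who want to see the shape of such a test, the sketch below regresses volatility on latency and jitter using ordinary least squares over simulated data. It illustrates the kind of relationship we tested, not our actual data or methodology; every number in it is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # number of time intervals (e.g., one-minute buckets)

# Simulated, illustrative data: mean latency and jitter per interval (ms).
latency = rng.gamma(shape=2.0, scale=5.0, size=n)
jitter = rng.gamma(shape=2.0, scale=3.0, size=n)

# Build a volatility series that loads on both, plus noise.
volatility = 0.1 + 0.02 * latency + 0.05 * jitter + rng.normal(0, 0.5, n)

# Ordinary least squares: volatility ~ const + latency + jitter.
X = np.column_stack([np.ones(n), latency, jitter])
beta, *_ = np.linalg.lstsq(X, volatility, rcond=None)
print(f"const={beta[0]:.3f}, latency coef={beta[1]:.3f}, jitter coef={beta[2]:.3f}")
```

A positive and significant coefficient on jitter in a regression of this shape is what “jitter predicts volatility” amounts to in practice.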

Following the Flash Crash of 2010, government regulators around the world came up with a variety of measures to address the issues inherent in automated trading. Most of these measures in one way or another propose to adjust latency – to “slow things down” or to remove the “speed advantage” of HFT.

However, in dealing with automated markets, we must use science to craft responses that address the root causes of violent market incidents without eliminating the longer-term advantages of technological innovation. If applied without a solid understanding of the effects of latency on the price discovery process, these knee-jerk government proposals could result in extra costs and risks to the very participants they are designed to protect.

Instead of hastily crafted regulations, we recommend three measures.

First, introduce latency transparency. Trading platforms should begin to report to market participants, on an ongoing basis, the characteristics of the time gap between when trades are requested and when they are executed, so that any valuable information contained in latency can be discovered directly along with asset prices. The markets will then do what they do best – quickly incorporate information about latency into their algorithmic trading decisions and, thus, market prices.
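
As a sketch of what such reporting might look like, the function below summarizes a window of observed latencies into a handful of statistics a platform could publish. The specific fields are assumptions of mine, not any existing reporting standard.

```python
import numpy as np

def latency_report(latencies_ms: np.ndarray) -> dict:
    """Summarize a window of observed latencies (in milliseconds) into
    statistics a trading platform could publish on an ongoing basis.
    The fields here are illustrative, not a standard."""
    return {
        "count": int(latencies_ms.size),
        "median_ms": float(np.percentile(latencies_ms, 50)),
        "p99_ms": float(np.percentile(latencies_ms, 99)),
        "max_ms": float(latencies_ms.max()),
        "jitter_ms": float(latencies_ms.std()),  # uncertainty about latency
    }

# Example: a simulated window of observations with occasional long delays.
rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=4.0, size=10_000)
print(latency_report(sample))
```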

Second, introduce derivatives – contracts whose value derives from the price of either a real asset such as a wheat harvest or a financial asset such as a government bond – to trade latency risk. If volatility can be traded, why not latency? That would help manage the risks associated with latency by allowing an investor to pay a price to shift them to a third party, just as is done now with wheat futures or interest rate swaps.
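
To see how such a contract might work, here is a sketch of a hypothetical cash-settled option on realized jitter, loosely modeled on how contracts on realized volatility settle. The payoff formula, strike and notional are invented for illustration; no such product currently trades.

```python
def latency_option_payoff(realized_jitter_ms: float,
                          strike_ms: float,
                          notional_per_ms: float) -> float:
    """Hypothetical cash-settled call on realized jitter: pays in
    proportion to how far realized jitter exceeds the strike, by
    analogy with options on realized volatility. Not a real product."""
    return notional_per_ms * max(realized_jitter_ms - strike_ms, 0.0)

# Example: an investor buys protection against jittery months. If
# realized jitter comes in at 12 ms against an 8 ms strike, at
# $50,000 per millisecond the contract pays $200,000.
print(latency_option_payoff(realized_jitter_ms=12.0,
                            strike_ms=8.0,
                            notional_per_ms=50_000.0))
```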

Third, design more pre-trade safeguards that briefly pause trading for everyone if markets start moving too quickly. In fact, that’s exactly what happened on May 6, 2010, in the stock index futures market. A five-second trading pause built deep into the trading platform forced all algorithms to reset their clocks, leading to the restoration of order in the market. But by the time the trading pause kicked in, the chain reaction had already begun. If we can design these pre-trade pauses to kick in well before prices move down 1,000 points, we will all be better off.
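
The logic of such a safeguard fits in a few lines. Below is a minimal sketch of a stop-logic pause: if prices move more than a set threshold within a lookback window, matching halts for everyone for a few seconds. The threshold, window and pause length are illustrative parameters of mine, not the actual rules of any exchange.

```python
from collections import deque

class PreTradePause:
    """Minimal sketch of a stop-logic safeguard: if the price moves by
    more than `threshold` (as a fraction) within `window_s` seconds,
    halt matching for everyone for `pause_s` seconds. Parameters are
    illustrative, not any exchange's actual rule."""

    def __init__(self, threshold=0.05, window_s=60.0, pause_s=5.0):
        self.threshold = threshold
        self.window_s = window_s
        self.pause_s = pause_s
        self.history = deque()            # (timestamp, price) pairs
        self.paused_until = float("-inf")

    def on_trade(self, t: float, price: float) -> bool:
        """Record a trade; return True if trading should be paused."""
        if t < self.paused_until:
            return True
        self.history.append((t, price))
        # Drop observations older than the lookback window.
        while self.history and self.history[0][0] < t - self.window_s:
            self.history.popleft()
        ref = self.history[0][1]
        if abs(price - ref) / ref > self.threshold:
            self.paused_until = t + self.pause_s  # everyone resets
            self.history.clear()
            return True
        return False

breaker = PreTradePause()
print(breaker.on_trade(0.0, 100.0))  # False: no move yet
print(breaker.on_trade(1.0, 93.0))   # True: a 7% drop within the window
```

The key design choice, as on May 6, 2010, is that the pause applies to everyone at once, so no algorithm can race ahead while the rest of the market resets.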

Overall, we as scientists need to measure, study and share with the public what’s really going on in fast automated markets. We need to set aside our old notions of how trading used to be done a mere decade ago and come up with a solid evidence-based understanding of how price discovery really works at extremely small scales.

This knowledge is critical for designing appropriate safeguards that would protect against the massive short-lived releases of energy such as the Flash Crash, while allowing all market participants to benefit from the positive aspects of automation over the long term. I believe that getting a handle on latency and its jitter is a way to get us there.

This article was originally published on The Conversation. Read the original article.