November 21, 2024

Did a denial-of-service attack cause the stock-market "flash crash"?

On May 6, 2010, the stock market experienced a “flash crash”: the Dow plunged 998 points, most of the drop coming in just a few minutes, before mostly recovering. Nobody was quite sure what caused it. An interesting theory from Nanex.com, based on extensive analysis of the actual electronic stock-quote traffic in the markets on that day and others, is that the flash crash was caused, perhaps inadvertently, by a kind of denial-of-service attack mounted by a market participant. They write,

While analyzing HFT (High Frequency Trading) quote counts, we were shocked to find cases where one exchange was sending an extremely high number of quotes for one stock in a single second: as high as 5,000 quotes in 1 second! During May 6, there were hundreds of times that a single stock had over 1,000 quotes from one exchange in a single second. Even more disturbing, there doesn’t seem to be any economic justification for this.

They call this practice “quote stuffing”, and they present detailed graphs and statistics to back up their claim.

The consequence of “quote stuffing” is that prices on the New York Stock Exchange (NYSE), which bore the brunt of this bogus quote traffic, lagged behind prices on other exchanges. Thus, when the market started dropping, quotes on the NYSE were higher than on other exchanges, which caused a huge amount of inter-exchange arbitrage, perhaps exacerbating the crash.

Why would someone want to do quote stuffing? The authors write,

After thoughtful analysis, we can only think of one [reason]. Competition between HFT systems today has reached the point where microseconds matter. Any edge one has to process information faster than a competitor makes all the difference in this game. If you could generate a large number of quotes that your competitors have to process, but you can ignore since you generated them, you gain valuable processing time. This is an extremely disturbing development, because as more HFT systems start doing this, it is only a matter of time before quote-stuffing shuts down the entire market from congestion.

The authors propose a “50ms quote expiration rule” that they claim would eliminate quote-stuffing.
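As I read it (this is my interpretation, not Nanex’s exact specification), such a rule would mean that a quote, once posted, must remain live for at least 50 milliseconds before it can be canceled, so a machine-gunned stream of instantly retracted quotes would face real execution risk. Here is a minimal sketch of how an exchange might enforce a minimum quote lifetime; the names and the reject-the-early-cancel behavior are my own assumptions:

    import time

    MIN_QUOTE_LIFETIME = 0.050  # 50 ms, per the proposed rule

    class Exchange:
        def __init__(self):
            self.posted_at = {}  # quote_id -> time the quote went live

        def post_quote(self, quote_id):
            self.posted_at[quote_id] = time.monotonic()

        def cancel_quote(self, quote_id):
            # Reject any cancel arriving before the quote has lived 50 ms;
            # until then the quote stays live and can be traded against.
            age = time.monotonic() - self.posted_at[quote_id]
            if age < MIN_QUOTE_LIFETIME:
                return False  # cancel rejected
            del self.posted_at[quote_id]
            return True

Under such a rule, stuffing thousands of quotes per second stops being free, because every one of those quotes can be hit before it can be pulled.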

I am not an expert on finance, so I cannot completely evaluate whether this article makes sense. Perhaps it is in the category of “interesting if true, and interesting anyway”.

Comments

  1. “Thus, when the market started dropping, quotes on the NYSE were higher than on other exchanges, which caused a huge amount of inter-exchange arbitrage, perhaps exacerbating the crash.”

    There’s one possible reason. If a trading program can cause a price imbalance between exchanges, it can then arbitrage that imbalance for a profit. If I were investigating this, I’d look for any connection between the source of the DoS and unusual arbitrage trading activity.

  2. Jason Remillard says

    Hyperactivity on HFT systems is not news, and yes, there are folks who focus primarily on that market ‘space’ and on other hybrid models.

    I would suspect that in this case you can’t be selective about which quotes you take and which you don’t, as suggested above.

    The concept of putting orders and then pulling them 50ms later is pretty crazy, and yes, it has nothing to do with humans. I can think of other businesses that have allowed this sort of thing in the past (domain tasting, for example) only to claw it back later.

    Tough problem.

    • Curt Sampson says

      Jason Remillard writes, “The concept of putting orders and then pulling them 50ms later is pretty crazy….”

      No, it’s actually not as crazy as it might first appear. Let me give you the perspective of someone who’s written a high-frequency trading system.

      Anybody entering non-matching quotes (i.e., bids or offers at a specific price that is not going to be immediately accepted, because it does not match an existing counterpart bid or offer) into a market is doing two things: 1. providing liquidity to the market, which is in general a good thing for many reasons I won’t get into here, and 2. putting himself at a disadvantage compared to other market participants, because he’s giving the other participants the choice of whether or not to accept (trade against) that quote.

      For someone entering such quotes, the disadvantage is offset by quoting a worse price than the participant might be willing to take otherwise and by being “first in line” (or earlier in line) should a counterparty who finds that price acceptable come along. That’s how dealers (I use this term in the technical sense of a market participant exhibiting this behaviour) make their money.

      Now if I, having decided to become a “dealer,” write my program and hook it up to the market, I have to deal with a certain amount of risk, and that risk will be reflected in the prices I offer. (The less risk I feel I’m taking on, the better prices I’ll offer to other market participants.) Part of my risk assessment has to do with how fast I can cancel orders: if I can cancel one of my current quotes in 50ms rather than 100ms, that clearly reduces the risk I’m taking. That allows me to offer better prices. (I don’t have to offer better prices, but I’m going to, because I have competitors whom I want either to beat or at least not lose against.)

      So how does this play out? It means that when the manager of your pension fund decides he wants to go and buy or sell a bunch of whatever I’m trading (often for reasons entirely unrelated to what I’m doing, or even the market in which I’m working), he gets a better price from me, your pension fund is more profitable than it would be otherwise, and you have lower pension payments next year.

      (Yes, I know that they went up instead, because some asshole somewhere else in the system totally burned your pension fund manager in a completely different deal. My point here is that my program being able to enter orders and cancellations more quickly is not the cause of this.)

      Now, if you want to argue that there are good safety reasons for limiting order flow rates and increasing latency and so on, that’s an argument I’m certainly willing to consider quite carefully. (I was a contributor to comp.risks long before I got into any financial systems.) But do realize, there is a cost to be paid for this, and the decision is about whether that cost is worthwhile. So far, faster and more automated trading has offered some pretty good benefits for the common guy on the street, though; I would far rather be a small trader now than twenty years ago.
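      To make the connection between cancel latency and quoted prices concrete, here is a toy model. The numbers and the square-root risk scaling are purely illustrative assumptions, not how a real pricing engine works:

        # Toy model: a resting quote can be picked off if the fair price
        # moves before the dealer can cancel it, so the dealer widens his
        # spread to cover the expected loss over the cancel-latency window.
        PRICE_VOL = 0.02         # assumed fair-price volatility, $ per sqrt(second)
        BASE_HALF_SPREAD = 0.01  # assumed half-spread covering all other costs, $

        def quoted_half_spread(cancel_latency_s):
            # Diffusion approximation: exposure risk ~ volatility * sqrt(time).
            return BASE_HALF_SPREAD + PRICE_VOL * cancel_latency_s ** 0.5

        for latency_ms in (100, 50, 10, 1):
            hs = quoted_half_spread(latency_ms / 1000.0)
            print(f"cancel latency {latency_ms:4d} ms -> half-spread ${hs:.4f}")

      Cutting the cancel latency shrinks the risk term, and competition pushes that saving into the prices everyone else sees.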

  3. Preston L. Bannister says

    There are a couple of standard techniques that apply here. Too much hyperactivity? Slow it down. Too many requests to handle? Throw some away.

    Make it a market rule: queue up requests and respond only after a delay, on a human time scale, at least a second (a minute or more might be even better). If repeated requests come from the same source during the delay, keep only the latest and throw the rest away. (A sketch of such a throttle follows below.)

    There is no human purpose met by transactions on ridiculously short timescales.
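    In code, such a throttle might look something like this (a minimal sketch; the one-second delay is the figure suggested above, and the data structures are illustrative):

      import time

      DELAY = 1.0  # respond on a human time scale: at least a second

      class ThrottledGateway:
          # Queue incoming requests; once the delay has elapsed, release
          # only the most recent request from each source and drop the rest.
          def __init__(self):
              self.latest = {}        # source -> most recent request
              self.release_at = None  # when the pending batch is released

          def submit(self, source, request):
              if self.release_at is None:
                  self.release_at = time.monotonic() + DELAY
              self.latest[source] = request  # overwrite: keep only the last

          def flush(self):
              if self.release_at is None or time.monotonic() < self.release_at:
                  return []  # delay has not yet elapsed
              batch = list(self.latest.items())
              self.latest.clear()
              self.release_at = None
              return batch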

    • You need more than just that rule. If I read things right, the actual information content of the packets here was irrelevant, and the whole thing was a denial-of-service attack on competitors’ computers. I’d suggest mandatory back-off protocols that would temporarily forbid an offending trader from sending further messages, but you know how long it would take before someone figured out how to game that.

      • Preston L. Bannister says

        When market rules make it unclear whether activity is legitimate or malicious, you have a problem (which is one way of reading the study). When market rules make a denial-of-service attack easy to spot, it is easier to defeat the attack efficiently at a very low level.

        Slowing things down is enough. Too many requests, and they get thrown away; way too many, and you block at the boundary (just as ISPs deal with DoS attacks now).

    • Bryan Feir says

      I am suddenly reminded of a congestion-control method called VirtualClock, which was a big idea (back in 1990 or so) for asynchronous transfer mode (ATM)-style networks. The basic idea is that every packet stream gets its own ‘virtual clock’ and an assigned interval equal to the inverse of its assigned bandwidth. Each new packet gets a virtual clock time of either the real clock time or the previous packet’s virtual clock time plus the interval, whichever is later. Packets are then served in priority order of their clock times, earliest first.

      The net result is that any packet stream that goes above its bandwidth allocation quickly drops its own priority below that of every other stream, and as long as the bandwidth isn’t overbooked, any well-behaved stream will always get through.
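      In code, the scheme might look roughly like this (my sketch of the idea as described, not the original 1990 implementation; the class and parameter names are mine):

        import heapq
        import itertools
        import time

        class VirtualClockScheduler:
            # Each stream's virtual clock advances by 1/bandwidth per packet;
            # packets are served in virtual-clock order, earliest first.
            def __init__(self):
                self.vclock = {}    # stream -> virtual clock of its last packet
                self.interval = {}  # stream -> seconds per packet
                self.heap = []      # (stamp, tiebreak, packet)
                self.tiebreak = itertools.count()

            def add_stream(self, stream, packets_per_second):
                self.interval[stream] = 1.0 / packets_per_second
                self.vclock[stream] = 0.0

            def enqueue(self, stream, packet):
                now = time.monotonic()
                # Real time, or previous stamp plus interval, whichever is later.
                stamp = max(now, self.vclock[stream] + self.interval[stream])
                self.vclock[stream] = stamp
                heapq.heappush(self.heap, (stamp, next(self.tiebreak), packet))

            def dequeue(self):
                return heapq.heappop(self.heap)[2] if self.heap else None

      A stream that sends faster than its allocation pushes its own stamps further and further into the future, so its packets sort behind everyone else’s, which is exactly the self-penalizing behavior described above.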

  4. Anonymous says

    Your link to Nanex has a space where a slash should be.

  5. In a sensible world, this is the kind of behavior that would get the perp barred from the securities industry for life after disgorging all their profits. Knowingly interfering with the operation of the market by producing bogus information is the kind of thing that the SEC used to take a very dim view of. But instead most people will look on it as an important technical innovation.

    And it sounds as if the proposed rule would prevent this particular bad action, but might well not prevent other hacks based on the same principle — which might have even more unforeseen consequences.