July 15, 2024

Security Lessons from the Big DDoS Attacks

Last week saw news of new Distributed Denial of Service (DDoS) attacks. These may be the largest DDoS attacks ever, peaking at about 300 Gbps (that is, 300 billion bits per second) of traffic aimed at the target, but, notwithstanding some of the breathless news coverage, these attacks are not vastly larger than anything seen before. The attacks are news, but not big news.

The attacks were aimed at Spamhaus, which publishes lists of purported spammers. Unsurprisingly, the attackers appear to be associated with spamming—specifically, with Cyberbunker, which is accused of hosting spammers.

One interesting aspect of the attacks is the way they exploited externalities. “Externality” is an economics term. For our purposes, it describes a situation where a party could efficiently prevent harm to others—that is, a dollar’s worth of harm could be prevented by spending less than a dollar on prevention—but the harm is not prevented because the party has little or no incentive to prevent harm to strangers. Externalities are a common problem in security—they’re one of the reasons the market has trouble providing adequate security. The recent DDoS attacks exploited three separate externalities.

The attackers’ goal was to flood Spamhaus or its network providers with Internet traffic, to overwhelm their capacity to handle incoming network packets. The main technical problem faced by a DoS attacker is how to amplify the attacker’s traffic-sending capacity, so that the amount of traffic arriving at the target is much greater than the attacker can send himself. To do this, the attacker typically tries to induce many computers around the Internet to send large amounts of traffic to the target.

The first stage of the attack involved the use of a botnet, consisting of a large number of software agents surreptitiously installed on the computers of ordinary users. These bots were commanded to send attack traffic. Notice how this amplifies the attacker’s traffic-sending capability: by sending a few commands to the botnet, the attacker can induce the botnet to send large amounts of attack traffic. This step exploits our first externality: the owners of the bot-infected computers might have done more to prevent the infection, but the harm from this kind of attack activity falls onto strangers, so the computer owners had a reduced incentive to prevent it.
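The fan-out from one command to a flood of bot traffic can be made concrete with a back-of-the-envelope sketch. All of the numbers below (botnet size, command size, per-bot traffic) are assumptions chosen purely for illustration, not figures from the actual attack:

```python
# All figures here are assumed, illustrative values -- not measurements
# from the Spamhaus attack.
NUM_BOTS = 30_000            # assumed size of the botnet
COMMAND_BYTES = 100          # one short command sent by the attacker
BYTES_PER_BOT = 1_000_000    # assumed attack traffic sent by each bot

def command_amplification(num_bots: int, command_bytes: int,
                          bytes_per_bot: int) -> float:
    """Ratio of attack traffic generated to command traffic sent."""
    return (num_bots * bytes_per_bot) / command_bytes

factor = command_amplification(NUM_BOTS, COMMAND_BYTES, BYTES_PER_BOT)
```

With these assumed numbers, a 100-byte command produces 30 gigabytes of attack traffic: the attacker's own sending capacity is almost irrelevant.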

Rather than having the bots send traffic directly to Spamhaus, the attackers used another step to further amplify the volume of traffic. They had the bots send queries to DNS proxies across the Internet (which answer questions about how machine names like www.freedom-to-tinker.com relate to IP addresses). This amplifies traffic because the bots can send a small query that elicits a large response message from the proxy.
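The bandwidth gain from this reflection step is just the ratio of response size to query size. A minimal sketch, using assumed packet sizes (the real amplification factor depends on the query type and the records returned):

```python
def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Bandwidth multiplier per reflected query: bytes arriving at the
    victim for each byte a bot sends to the proxy."""
    return response_bytes / query_bytes

# Assumed sizes for illustration: a small UDP DNS query versus a large
# response (e.g. one carrying many records).
QUERY_BYTES = 64
RESPONSE_BYTES = 3_200

factor = amplification_factor(QUERY_BYTES, RESPONSE_BYTES)
```

Under these assumptions each byte the bots spend on queries yields fifty bytes aimed at the victim, on top of the botnet fan-out from the first stage.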

Here is our second externality: the existence of open DNS proxies that will respond to requests from anywhere on the Internet. Many organizations run DNS proxies for use by their own people. A well-managed DNS proxy is supposed to check that requests are coming from within the same organization; but many proxies fail to check this—they’re “open” and will respond to requests from anywhere. This can lead to trouble, but the resulting harm falls mostly on people outside the organization (e.g., Spamhaus), so there isn’t much incentive to take even simple steps to prevent it.
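The check a well-managed proxy should perform is simple: is the requester inside our own network? A minimal sketch in Python, using an assumed internal prefix (the RFC 5737 documentation range, so the example stays hypothetical):

```python
import ipaddress

# Assumed internal prefix for the organization running the proxy.
INTERNAL_NET = ipaddress.ip_network("192.0.2.0/24")

def should_answer(source_ip: str) -> bool:
    """A closed proxy answers only queries whose source address falls
    inside the organization's own network; everyone else is refused."""
    return ipaddress.ip_address(source_ip) in INTERNAL_NET
```

Here `should_answer("192.0.2.44")` is true (an internal client) while `should_answer("203.0.113.9")` is false (an outside address an open proxy would wrongly serve).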

To complete the attack, the DNS requests were sent with false return addresses, saying that the queries had come from Spamhaus—which causes the DNS proxies to direct their large response messages to Spamhaus.

Here is our third externality: the failure to detect packets with forged return addresses. When a packet with a false return address is injected, it’s fairly easy for the originating network to detect this: if a packet comes from inside your organization, but it has a return address that is outside your organization, then the return address must be forged and the packet should be discarded. But many networks fail to check this. This causes harm but—you guessed it—the harm falls outside the organization, so there isn’t much incentive to check. And indeed, this kind of packet filtering has long been considered a best practice but many networks still fail to do it.
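The filtering rule described above is equally simple to express. A sketch in the spirit of that best practice, again using an assumed address block (an RFC 5737 documentation prefix) to stand in for the organization's own addresses:

```python
import ipaddress

# Assumed address block owned by the originating network.
OUR_PREFIX = ipaddress.ip_network("198.51.100.0/24")

def egress_ok(source_ip: str) -> bool:
    """Egress check: a packet leaving our network must carry one of our
    own addresses as its source; anything else has a forged return
    address and should be dropped."""
    return ipaddress.ip_address(source_ip) in OUR_PREFIX
```

A packet from `198.51.100.7` passes and is forwarded; a packet claiming to come from `192.0.2.55` (say, the victim's address) fails the check and is dropped at the edge, before it can be reflected at anyone.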

To review, the attackers used three tricks to amplify their traffic: exploiting bots, bouncing traffic off of open DNS proxies, and forging return addresses. Each trick exploited an externality.

The role of externalities in these attacks shouldn’t be too surprising. Attackers will strike where the defenses are weakest, and defenses are often weakest where the incentive to defend is lacking.

Can we eliminate these externalities? That’s not easy. For now, the main strategy is moral persuasion, asking people to tighten up their systems for the good of the community. That’s useful, but when hard choices have to be made, organizations will protect their own assets. And sometimes they won’t even know their infrastructure was involved in an attack.


  1. nathan glock says

    My 2 cents is that the software makers are culpable as well. The holes that allow botnets to be established and worms to gain a foothold simply by visiting a webpage or opening an email are unacceptable. I do agree it’s a matter of incentives to fix.

  2. This whole “big DDoS” thing is a total scam. The “attack” was so small that nobody except them noticed, and had the company that “defended” them not been running its viral PR campaign, nobody else would have noticed or cared.

  3. Andrew McConachie says

    The best practice you mention regarding forged source IP addresses is codified in IETF BCP 38. To learn more, see the IETF’s BCP 38 document.

    MIT’s Spoofer project aims to map providers that fail to block source IP address spoofing on their networks. Download this software and run it.

    Until we start applying more pressure to providers who allow source IP address spoofing, DDoSing will only get worse. Don’t do business with providers that fail to implement BCP 38. We need to identify them and publicly shame them into getting their act together. The MIT Spoofer project is a good place to start, but they need more data, so download their client and run it.

  4. And just to make things a little more circular, Spamhaus is (ostensibly at least) an organization devoted to stamping out certain kinds of externality by making other organizations pay a price for generating spam (which costs time and money for the recipient but not so much for the sender).

    But I might take issue with your characterization of externalities regarding bot-infected computers. If you add up all the time and resources required for individual owners of potentially bot-infected personal computers to keep their machines clean (possibly including getting them cleaned after initial infection) it’s not at all clear whether that number is less than or greater than the damage done by botnets.

    • Anonymous says

      I suspected but didn’t know, and still don’t really understand. Am I in danger? I’m so confused right now, and this is the icing. Can someone break this down? I don’t know who to trust.