November 25, 2024

Reader Replies on Congestion and the Commons

Thanks to all of the readers who responded to my query about why the Internet’s congestion control mechanisms aren’t destroyed by selfish noncompliance. Due to the volume of responses, I can’t do justice to all of you here, but I’ll do my best to summarize.

Jordan Lampe, Grant Henninger, and David Spalding point out that “Internet accelerator” utilities (like these) do exist, but they don’t seem to come from mainstream vendors. Users may be leery of some of these products or some of these vendors. Wim Lewis suggests that these utilities may work by opening multiple connections to download material in parallel, which probably qualifies as a way of gaming the system to get more than one’s “fair share” of bandwidth.
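As a rough illustration of the multiple-connection trick Wim describes, here is a hypothetical Python sketch that splits one download across several simultaneous HTTP range requests. Each connection gets its own TCP congestion window, so the downloader as a whole collects several “fair shares.” The URL, file size, and connection count are all invented for the example.

```python
# Hypothetical sketch: fetch one file over several parallel connections,
# each asking for a different byte range. Each connection is governed by
# its own congestion window, so the downloader as a whole gets several
# "fair shares" of the bottleneck link. URL and sizes are illustrative only.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

URL = "http://example.com/big-file"   # placeholder, not a real file
CONNECTIONS = 4
TOTAL_SIZE = 40_000_000               # assume the file size is already known

def fetch_range(start, end):
    # Ask the server for just this slice of the file.
    req = Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urlopen(req) as resp:
        return resp.read()

chunk = TOTAL_SIZE // CONNECTIONS
ranges = [(i * chunk,
           (i + 1) * chunk - 1 if i < CONNECTIONS - 1 else TOTAL_SIZE - 1)
          for i in range(CONNECTIONS)]

with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
    parts = list(pool.map(lambda r: fetch_range(*r), ranges))

data = b"".join(parts)
```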

Many readers argue that the incentive to cheat is less than I had suggested.

Aaron Swartz, Russell Borogove, and Kevin Marks argue that it’s not so easy to cheat the congestion control system. You can’t just transmit at full speed, since you don’t want to oversaturate any network links with your own traffic. Still, I think that it’s possible to get some extra bandwidth by backing off more slowly than normal, and by omitting certain polite features such as the so-called “slow start” rule.
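To make that kind of “cheating” concrete, the toy sketch below (not real TCP, just the shape of the idea) compares a compliant sender, which starts small and halves its window when it detects congestion, with a greedy sender that starts with a large window and backs off only slightly. All of the numbers are invented for the example.

```python
# Toy model of the cheating strategies mentioned above: not real TCP,
# just the shape of the idea. A compliant sender starts from a window of 1
# (slow start) and halves its window on congestion; a greedy sender starts
# large and backs off only slightly. All numbers are illustrative.

def run(start_window, decrease_factor, rounds=20, loss_every=5):
    window, sent = start_window, 0
    for t in range(1, rounds + 1):
        sent += window
        if t % loss_every == 0:        # pretend congestion is detected here
            window = max(1, window * decrease_factor)
        else:
            window += 1                # additive increase
    return sent

compliant = run(start_window=1,  decrease_factor=0.5)   # polite, TCP-like behavior
greedy    = run(start_window=20, decrease_factor=0.9)   # slow backoff, no slow start
print(f"compliant sender sent {compliant:.0f} units; greedy sender sent {greedy:.0f}")
```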

Aaron Swartz, Wim Lewis, Mark Gritter, and Seth Finkelstein argue that most congestion happens at the endpoints of the Net: either at the link connecting directly to the server, or at the “last mile” link to the user’s desktop. These links are not really shared, since they exist only for the benefit of one party (the server or the user, respectively); so the bandwidth you gained by cheating would be stolen only from yourself.

Carl Witty and Karl-Friedrich Lenz argue that most of the relevant Net activity consists of downloads from big servers, so these server sites are the most likely candidates for cheating. Big servers have a business incentive to keep the Net running smoothly, so they are less likely to cheat.

Mark Gritter argues that most Net connections are short-lived and so don’t give congestion control much of a chance to operate, one way or the other.

All of these arguments imply that the incentive to cheat is not as large as I had suggested. Still, if the incentive is nonzero, at least for some users, we would expect to see more cheating than we do.

Russell Borogove and John Gilmore argue that if cheating became prevalent, ISPs and backbone providers could deploy countermeasures to selectively drop cheaters’ packets, thereby lowering the benefit of cheating. This is plausible, but it doesn’t explain the apparent lack of cheating we see. The greedy strategy for users is to cheat now, and then stop cheating when ISPs start fighting back. But users don’t cheat much now.

Wim Lewis and Carl Witty suggest that if we’re looking for cheaters, we might look first at users who are already breaking or stretching the rules, such as porn sites or peer-to-peer systems.

Finally, Mark Gritter observes that defections do happen now, though in indirect ways. Some denial of service attacks operate by causing congestion, and some protocols related to streaming video or peer-to-peer queries appear to bend the rules. Perhaps the main vehicle for cheating will be through new protocols and services and not by modification of existing ones.

Thanks to all of you for an amazing demonstration of the collective mind of the Net at work.

Ultimately, I think there’s still a mystery here, though it’s smaller than I originally imagined.

Congestion Control and the Tragedy of the Commons

I have been puzzling lately over why the Internet’s congestion control mechanisms work. They are a brilliant bit of engineering, but they fail utterly to account for the incentives of the Internet’s users. By any rational analysis, they ought to fail spectacularly, causing the Net to grind to a halt. And yet, for some unfathomable reason, these mechanisms do work.

Let me explain. As a starting point, think about the cars on a busy highway. If there aren’t many cars, the road is underutilized, carrying only a fraction of its capacity. Add more cars, and the road is used more efficiently, carrying more cars per minute past any given point. Add too many cars, though, and you’ll cause a traffic jam. Traffic slows, and the road becomes much less efficient as only a few cars per minute manage to crawl past each point. The road is congested.

Now think of the Internet as a highway, and each packet of data on the Net as a car. Adding more traffic increases the Net’s throughput, but only up to a point. Adding too much traffic leads to congestion, with a rapid dropoff in efficiency. If too many people are sending too much data, the Net slows to a crawl.

To address this problem, the TCP protocol (upon which are built most of the popular Net services, including email and the web) includes a “congestion control” mechanism. The mechanism is subtle in its details but pretty simple in its basic concept. Whenever two computers are talking via TCP, and they detect possible congestion on the path between them, they slow down their conversation. If everybody does this, congestion is avoided, since the onset of congestion causes everybody to back off enough to stave off an Internet traffic jam.
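Here is a minimal toy simulation of that basic concept (not the real TCP algorithm, just its general shape): several senders share a link, each steadily increases its sending rate, and each halves its rate whenever the link is overloaded. The capacity, rates, and number of senders are invented for the example.

```python
# A minimal sketch of the idea behind TCP-style congestion control (not the
# real protocol): several senders share a link, each increases its rate
# steadily, and each halves its rate whenever the link is overloaded.
# The aggregate load then hovers around the link's capacity instead of
# collapsing into a traffic jam. All numbers are illustrative.

CAPACITY = 100          # link capacity, in arbitrary units
SENDERS = 4
rates = [1.0] * SENDERS

for step in range(50):
    congested = sum(rates) > CAPACITY      # stand-in for "packet loss detected"
    for i in range(SENDERS):
        if congested:
            rates[i] = max(1.0, rates[i] / 2)   # multiplicative decrease
        else:
            rates[i] += 1.0                     # additive increase
    if step % 10 == 0:
        print(f"step {step:2d}: total load = {sum(rates):6.1f} / {CAPACITY}")
```

The point of the sketch is only that the backoff is voluntary: nothing in the loop forces any individual sender to halve its rate.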

If you back off in response to congestion, you’re making the Internet a better place. You’re accepting a slowdown in your communication, in order to make the Internet faster for everybody else.

This is a perfect Tragedy of the Commons setup. We’re all better off if everybody backs off. But backing off is voluntary, and we each have a selfish motive to skip the backoff and just grab as much bandwidth as we can.

The mystery is this: Why hasn’t the tragedy happened? Virtually everybody does back off, and the Net doesn’t collapse under congestion. This happens despite the fact that a Net inhabited by rationally self-interested people should apparently behave otherwise. What’s going on?

Nobody seems to have an adequate explanation. One theory is that the average person doesn’t know how to cheat; but others could make and sell products that offer better Net performance by not backing off. Another theory is that Microsoft supplies most of the Net’s software and is making the choice for most consumers; and Microsoft’s self-interest is in having a useful Net. But again, why don’t others show up selling add-on “booster” products that cheat? A third theory is that people really are altruistic on the Net, behaving in a more civil and community-minded fashion than they do in real life. That seems pretty unlikely.

I’m stumped. Do any of you have an explanation for this?

Standards vs. Regulation

The broadcast flag “debate” never ceases to amaze me. It’s a debate about technology, but in forum after forum the participants are all lawyers. And it takes place in a weird reality distortion field where certain technological non sequiturs pass for unchallenged truth.

One of these is that the broadcast flag is a technical “standard.” Even opponents of the flag have taken to using this term. As I have written before, there is a difference between standards and regulation, and the broadcast flag is clearly regulation.

For future reference, here is a handy chart you can use to distinguish standards from non-standards.

STANDARD                     NOT A STANDARD
written by engineers         written by lawyers
voluntary                    mandatory
enables interoperation       prevents interoperation
backed by technologists      opposed by technologists

Simple, isn’t it?

UPDATE (March 7, 8:00 AM): On further reflection (brought on by the comments of readers, including Karl-Friedrich Lenz) I changed the table above. Originally the right-hand column said “regulation” but I now realize that goes too far.

Keeping Honest People Honest

At today’s House committee hearing on the broadcast flag, Fritz Attaway of the MPAA used a popular (and revealing) argument: the purpose of the broadcast flag is “to keep honest people honest.” This phrase is one of my pet peeves, since it reflects sloppy thinking about security.

The first problem with “keeping honest people honest” is that it’s an oxymoron. The very definition of an honest person is that they can be trusted even when nobody is checking up on them. Nothing needs to be done to keep honest people honest, just as nothing needs to be done to keep tall people tall.

The second problem is more substantial. To the extent that “keeping honest people honest” involves any analytical thinking, it reflects a choice to build a weak but conspicuous security mechanism, so that people know when they are acting outside the system designer’s desires. (Mr. Attaway essentially made this argument at today’s hearing.) The strategy, in other words, is to put a “keep out” sign on a door, rather than locking it. This strategy indeed works, if people are honest.

But this is almost never the kind of security technology that the “keeping honest people honest” crowd is advocating. In my experience, you hear this phrase almost exclusively from advocates of big, complicated, intrusive systems that have turned out to be much weaker than planned. Having failed to build a technologically strong system, they say with cheerful revisionism that their goal all along was just to “keep honest people honest.” Then they try to sell us their elaborate, clunky, expensive system.

The problem is that it’s cheap and easy to build a “keep out” sign. If that’s all you want – if all you want is to help honest people keep track of their obligations – then simple, noncoercive technology works fine. You don’t need a big, bureaucratic initiative like the broadcast flag if that’s your goal.

The funny thing here is that the MPAA is getting out in front of the curve. Usually vendors wait until their security technology has failed before they change their sales pitch to “keeping honest people honest.”

Lexmark Opinion Available

The Court’s opinion in the Lexmark case is now available. Here’s a summary. (Caveat: I’m inferring some of the technical details, since all I have is the Court’s summary of what the expert witnesses said; but I’m fairly confident that my inferences are correct.)

Toner cartridges for certain Lexmark printers contain small computer programs that tell the printer how much toner is left in the cartridge. The Lexmark printers use cryptographic means to “authenticate” the cartridge program; if the program doesn’t pass the cryptographic test, the printer refuses to work with it.
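The opinion doesn’t spell out the algorithm, but the general shape of such a handshake is easy to sketch. Here is a hypothetical Python version: the printer computes a keyed digest over the cartridge’s program and refuses to cooperate if the reported value doesn’t match. The key, the digest function, and the program bytes are all invented for the example.

```python
# Hypothetical sketch of the kind of handshake described in the opinion:
# the printer computes a keyed digest over the bytes of the cartridge's
# program and rejects the cartridge if the digest doesn't match. The key,
# digest choice, and program bytes are invented; the opinion does not give
# these details.
import hashlib
import hmac

SHARED_KEY = b"example-key"          # invented; not any vendor's actual secret

def cartridge_digest(program_bytes: bytes) -> bytes:
    return hmac.new(SHARED_KEY, program_bytes, hashlib.sha1).digest()

def printer_accepts(program_bytes: bytes, reported_digest: bytes) -> bool:
    expected = cartridge_digest(program_bytes)
    return hmac.compare_digest(expected, reported_digest)

# A cartridge carrying a verbatim copy of the original program (and its
# digest) passes the test; a cartridge with a different program, and no way
# to compute the right digest, does not.
original_program = b"\x00" * 50      # stand-in for the ~50-byte program
print(printer_accepts(original_program, cartridge_digest(original_program)))   # True
print(printer_accepts(b"different program", cartridge_digest(original_program)))  # False
```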

Static Control’s cartridge chip contains a verbatim copy of the Lexmark cartridge program, a program which is about fifty bytes in length. The Court found this small program to be copyrightable. The Court also found, as a factual matter, that Static Control could have figured out by further reverse engineering how to write a different program that passed the cryptographic test. (Lexmark did not challenge Static Control’s right to reverse engineer any of the Lexmark products.) The Court therefore found that Static Control’s redistribution of Lexmark’s cartridge program was copyright infringement.

The Court also ruled that Static Control’s program was a circumvention device under the DMCA, since (the Court said) it circumvented Lexmark’s cryptographic handshake. The Court actually found that the handshake controls access to both the cartridge program and the printer’s software, and therefore found a double DMCA violation.

If the Court’s factual findings are correct, the copyright portion of the ruling seems pretty straightforward.

The DMCA portion is another story. According to the Court, the Lexmark software implements the access control measure; but the Static Control software, which is identical to the Lexmark software, improperly circumvents the measure. In other words, circumvention is determined not by what a device does, but by whether the maker of some complementary product has approved it.

The other slightly puzzling aspect of the Court’s DMCA analysis is the finding that the cryptographic handshake controls access (by the user) to the printer’s software. Whether or not a valid toner cartridge is inserted, the printer’s software runs, and it provides services to the user. Thus the user has access to the printer software no matter what; so it’s hard to see how anything is controlling access. True, the printer software behaves differently when a conforming cartridge is inserted, but it seems like a real stretch to say that this change in behavior constitutes “access” to the printer.

It will be interesting to see what happens next. Perhaps the copyright ruling will render the DMCA issues moot; or perhaps the Court’s DMCA reasoning will be subject to review at some point.