
Comcast and Net Neutrality

The revelation that Comcast is degrading BitTorrent traffic has spawned many blog posts on how the Comcast incident bolsters the blogger’s position on net neutrality – whatever that position happens to be. Here is my contribution to the genre. Mine is different from all the others because … um … well … because my position on net neutrality is correct, that’s why.

Let’s start by looking at Comcast’s incentives. Besides being an ISP, Comcast is in the cable TV business. BitTorrent is an efficient way to deliver video content to large numbers of consumers – which makes BitTorrent a natural competitor to cable TV. BitTorrent isn’t a major rival yet, but it might plausibly develop into one. Which means that Comcast has an incentive to degrade BitTorrent’s performance and reliability, even when BitTorrent isn’t in any way straining Comcast’s network.

So why is Comcast degrading BitTorrent? Comcast won’t say. They won’t even admit what they’re doing, let alone offer a rationale for it, so we’re left to speculate. The technical details of Comcast’s blocking are only partially understood, but what we do know seems hard to square with claims that Comcast is using the most effective means to optimize some resource in their network.

Now pretend that you’re the net neutrality czar, with authority to punish ISPs for harmful interference with neutrality, and you have to decide whether to punish Comcast. You’re suspicious of Comcast, because you can see their incentive to bolster their cable-TV monopoly power, and because their actions don’t look like a good match for the legitimate network management goals that they claim motivate their behavior. But networks are complicated, and there are many things you don’t know about what’s happening inside Comcast’s network, so you can’t be sure they’re just trying to undermine BitTorrent. And of course it’s possible that they have mixed motives, needing to manage their network but choosing a method that had the extra bonus feature of hurting BitTorrent. You can ask them to justify their actions, but you can expect to get a lawyerly, self-serving answer, and to expend great effort separating truth from spin in that answer.

Are you confident that you, as net neutrality czar, would make the right decision? Are you confident that your successor as net neutrality czar, who would be chosen by the usual political process, would also make the right decision?

Even without a regulatory czar, wheels are turning to punish Comcast for what they’ve done. Customers are unhappy and are putting pressure on Comcast. If they deceived their customers, they’ll face lawsuits. We don’t know yet how things will come out, but it seems likely Comcast will regret their actions, and especially their lack of transparency.

All of which – surprise surprise – confirms my position on net neutrality: there is a risk of harmful behavior by ISPs, but writing and enforcing neutrality regulation is harder than it looks, and non-regulatory forces may constrain ISPs enough.

Comcast Blocks Some Traffic, Won't Explain Itself

Comcast’s apparent policy of blocking some BitTorrent traffic, which has been discussed on tech sites [example] for months, has now broken out into the mainstream press. Comcast is making things worse by refusing to talk plainly about what they are doing and why. (This is an improvement over Comcast’s previously reported denials, which now appear to be inconsistent with the facts.)

To the extent that Comcast has explained itself, its story seems to be that it is slowing traffic from heavy users in order to keep the network moving smoothly. This would be a reasonable thing for Comcast to do (if they were open about it) – but it’s not quite what they’re actually doing.

For starters, Comcast’s measures are not aimed at heavy users but rather at users of certain protocols such as BitTorrent. And not even all users of BitTorrent are targeted, but only those who use BitTorrent in a particular way: uploading a file to non-Comcast users while not simultaneously downloading parts of the same file. (In BitTorrent jargon, this is called “seeding”.) To get an idea of how odd this is, consider that an uploader who is experiencing blocking can apparently avoid the blocking by adding some download traffic.
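
To make the targeting concrete, here is a toy sketch of the traffic pattern that appears to be singled out. This is not Comcast's or Sandvine's actual logic; the function name and the all-or-nothing test are my own simplification of "uploading without simultaneously downloading."

    # Toy classifier (not the ISP's actual logic) for the behavior that
    # appears to be targeted: uploading pieces of a file to peers while
    # downloading none of the same file, i.e. "seeding" in BitTorrent jargon.
    def looks_like_seeding(bytes_uploaded: int, bytes_downloaded: int) -> bool:
        return bytes_uploaded > 0 and bytes_downloaded == 0

    # Oddly, under a rule like this, a blocked uploader could escape the
    # blocking simply by requesting a few pieces of the file it is serving.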

If managing congestion were really the goal, it would likely be easier for Comcast to simply measure how much traffic each user is generating and drop the heaviest users' packets, or just to discard packets at random (a tactic that naturally falls most heavily on those who send and receive the most packets).
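
For comparison, here is a minimal sketch of those two content-neutral policies. It is purely illustrative: the class name, the byte threshold, and the drop probability are invented, and a real router would reset its counters every measurement interval.

    import random
    from collections import defaultdict

    class CongestionManager:
        """Illustrative sketch of two content-neutral congestion policies;
        the threshold and drop probability below are arbitrary examples."""

        def __init__(self, heavy_user_threshold_bytes=10_000_000):
            self.bytes_by_user = defaultdict(int)
            self.threshold = heavy_user_threshold_bytes

        def should_drop_heaviest(self, user_id, packet_len):
            # Policy 1: meter each user's traffic and drop packets from
            # users who exceed a byte budget in the measurement window.
            self.bytes_by_user[user_id] += packet_len
            return self.bytes_by_user[user_id] > self.threshold

        def should_drop_random(self, drop_probability=0.01):
            # Policy 2: drop packets at random; the heaviest senders lose
            # the most packets simply because they send the most.
            return random.random() < drop_probability

Either policy throttles heavy traffic without caring which protocol generated it, which is the property Comcast's protocol-specific targeting lacks.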

Beyond its choice of what to block, Comcast is using an unusual and nonstandard form of blocking.

There are well-established mechanisms for dealing with traffic congestion on the Internet. Networks are supposed to respond to congestion by dropping packets; endpoint computers notice that their packets are being dropped and respond by slowing their transmissions, thus relieving the congestion. The idea sounds simple, but getting the details right, so that the endpoints slow down just enough but not too much, and the network responds quickly to changes in traffic level but doesn’t overreact, required some very clever, subtle engineering.
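
For readers who want to see the endpoint side of this in miniature, below is a simplified sketch of the additive-increase/multiplicative-decrease rule at the heart of TCP congestion control: grow the sending window gently while packets get through, and cut it sharply when a drop signals congestion. It omits slow start, timeouts, and fast recovery, and the numbers are arbitrary.

    def aimd_step(window, packet_dropped, increase=1.0, decrease_factor=0.5):
        """One simplified round of TCP-style congestion control.
        window is the congestion window in packets per round trip;
        packet_dropped says whether the network dropped one of our
        packets this round, signaling congestion."""
        if packet_dropped:
            # Multiplicative decrease: back off sharply so congestion clears.
            return max(1.0, window * decrease_factor)
        # Additive increase: probe gently for spare capacity.
        return window + increase

    # A sender ramps up for ten round trips, halves its window on a drop,
    # then resumes probing.
    window = 1.0
    for dropped in [False] * 10 + [True] + [False] * 5:
        window = aimd_step(window, dropped)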

What Comcast is doing instead is to cut off connections by sending forged TCP Reset packets to the endpoints. Reset packets are supposed to be used by one endpoint to tell the other endpoint that an unexplained, unrecoverable error has occurred and therefore communication cannot continue. Comcast’s equipment (apparently made by a company called Sandvine) seems to send both endpoints a Reset packet, purporting to come from the other endpoint, which causes both endpoints to break the connection. Doing this is a violation of the TCP protocol, which has at least two ill effects: it bypasses TCP’s well-engineered mechanisms for handling congestion, and it erodes the usefulness of Reset packets as true indicators of error.
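
As an illustration of why forged Resets are detectable at all, here is a minimal monitoring sketch using the scapy packet library. It is not how any particular BitTorrent client defends itself; the heuristic (a Reset arriving with an IP TTL very different from the connection's other packets was probably generated by a box in the middle rather than by the real endpoint) and the threshold of five hops are both assumptions for illustration. Sniffing requires administrator privileges.

    from scapy.all import sniff, IP, TCP

    last_ttl = {}  # (src, sport, dst, dport) -> TTL seen on normal packets

    def inspect(pkt):
        if IP not in pkt or TCP not in pkt:
            return
        key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
        if pkt[TCP].flags & 0x04:  # RST flag set
            usual = last_ttl.get(key)
            if usual is not None and abs(pkt[IP].ttl - usual) > 5:
                print("Suspicious RST on", key,
                      "TTL", pkt[IP].ttl, "vs usual", usual)
        else:
            last_ttl[key] = pkt[IP].ttl

    # Watch TCP traffic and flag Resets whose TTL looks injected mid-path.
    sniff(filter="tcp", prn=inspect, store=0)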

People have apparently figured out already how to defeat this blocking, and presumably it won’t be long before BitTorrent clients incorporate anti-blocking measures.

It looks like Comcast is paying the price for trying to outsmart their customers.

Greetings, and a Thought on Net Neutrality

Hello again, FTT readers. You may remember me as a guest blogger here at FTT, writing about anti-circumvention, the print media’s superiority (or lack thereof) to Wikipedia, and a variety of other topics.

I’m happy to report that I’ve moved to Princeton to join the university’s Center for Information Technology Policy as its new associate director. Working with Ed and others here on campus, I’ll be helping bring the Center into its own as a leading interdisciplinary venue for research and conversation about the social and political impact of information technology.

Over the next few months, I’ll be traveling the country to look at how other institutions approach this area, in order to develop a strategic plan for Princeton’s involvement in the field. As a first step toward understanding the world of tech policy, I’ve been doing a lot of reading lately.

One great source is The Creation of the Media by Princeton’s own Paul Starr. It’s carefully argued and highly readable, and I’ve found it challenging to some of my assumptions. Conversations in tech policy often seem to stem from the premise that in the interaction between technology and society, the most important causal arrow points from the technologies into the social sphere. “Remix culture”, perhaps the leading example at the moment, is a major cultural shift that is said to stem from inherent properties of digital media, such as the fact that a copy of a digital work is identical to the original.

But Paul argues that politics usually shapes the effects of technology, not the other way around. For example, although cheap printing technologies helped make the early United States one of the most literate countries of its time, Paul argues that America’s real advantage was its postal system. Congress not only invested heavily in the postal service, but also gave printed material a special discounted rate, effectively subsidizing publications of all kinds. As a result, much more printed material was mailed in America than in, say, Britain at the same time.

One fascinating observation from Paul’s book (pages 180-181 in the hardcover edition, for those following along at home) concerns the telegraph. In Britain, the telegraph was nationalized in order to ensure that private network operators didn’t take advantage of the natural monopoly that they enjoyed (“natural” since once there was one set of telegraph wires leading to a place, it became hard to justify building a second set).

In the United States, there was a vociferous debate about whether or not to nationalize the telegraph system, which was controlled by Western Union, a private company:

[W]ithin the United States, Western Union continued to dominate the telegraph industry after its triumph in 1866 but faced two constraints that limited its ability to exploit its market power. First, the postal telegraph movement created a political environment that was, to some extent, a functional substitute for government regulation. Britain’s nationalization of the telegraph was widely discussed in America. Worried that the US government might follow suit, Western Union’s leaders at various times extended service or held rates in check to keep public opposition within manageable levels. (Concern about the postal telegraph movement also led the company to provide members of Congress with free telegraph service — in effect, making the private telegraph a post office for officeholders.) Public opinion was critical in confining Western Union to its core business. In 1866 and again in 1881, the company was on the verge of trying to muscle the Associated Press aside and take over the wire service business itself when it drew back, apparently out of concern that it could lose the battle over nationalization by alienating the most influential newspapers in the country. Western Union did, however, move into the distribution of commercial news and in 1871 acquired majority control of Gold and Stock, a pioneering financial information company that developed the stock ticker.

This situation – a dynamic equilibrium in which a private party polices its own behavior in order to stave off the threat of government intervention – strikes me as closely analogous to the net neutrality debate today. Network operators, although not subject to neutrality requirements, are reluctant to exercise the options for traffic discrimination that are formally open to them, because they recognize that doing so might lead to regulation.

Why Was Skype Offline?

Last week Skype, the popular, free Net telephony service, was unavailable for a day or two due to technical problems. Failures of big systems are always interesting and this is no exception.

We have only limited information about what went wrong. Skype said very little at first but is now opening up a little. Based on their description, it appears that the self-organization mechanism in Skype’s peer-to-peer network became unstable. Let’s unpack that to understand what it means, and what it can tell us about systems like this.

One of the surprising facts about big information systems is that the sheer scale of a system changes the engineering problems you face. When a system grows from small to large, the existing problems naturally get harder. But you also see entirely new problems that didn’t even exist at small scale – and, worse yet, this will happen again and again as your system keeps growing.

Skype uses a peer-to-peer organization, in which the traffic flows through ordinary users’ computers rather than being routed through a set of central servers managed by Skype itself. The advantage of exploiting users’ computers is that they’re available at no cost and, conveniently, there are more of them to exploit when there are more users requesting service. The disadvantage is that users’ computers tend to reboot or go offline more than dedicated servers would.

To deal with the ever-changing population of user computers, Skype has to use a clever self-organization algorithm that allows the machines to organize themselves without relying (more than a tiny bit) on a central authority. Self-organization has two goals: (1) the system must respond quickly to changed conditions to get back into a good configuration soon, and (2) the system must maintain stability as conditions change. These two goals aren’t entirely contradictory, but they are at least in tension. Responding quickly to changes makes it difficult to maintain stability, and the system must be engineered to make this tradeoff wisely in a wide range of conditions. Getting this right in a huge P2P system like Skype is tricky.
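
Skype has not published its algorithm, so the code below is only a toy model of the tradeoff, not a description of their system. It treats self-organization as a feedback loop: each round the network measures how far it is from a good configuration (the "error") and corrects by some fraction of that error (the "gain"). A modest gain recovers slowly but settles; an aggressive gain overshoots, and after a large, sudden disturbance the oscillation can grow instead of dying out.

    import random

    def simulate(gain, disturbance_at=20, rounds=60):
        """Toy feedback loop, purely illustrative. 'error' stands for how
        far the overlay is from a good configuration; each round the
        system corrects by gain * error plus a little measurement noise."""
        error, history = 0.0, []
        for t in range(rounds):
            if t == disturbance_at:
                error += 100.0  # e.g. a synchronized wave of reboots
            correction = gain * error + random.gauss(0, 1)
            error -= correction
            history.append(error)
        return history

    # gain=0.3 settles smoothly after the disturbance; gain=2.2 over-corrects
    # every round, so the error flips sign and grows: the loop is unstable.
    calm = simulate(gain=0.3)
    runaway = simulate(gain=2.2)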

Which brings us to the story of last week’s failure, as described by Skype. On Tuesday August 14, Microsoft released a new set of patches to Windows, according to their normal monthly cycle. Many Windows machines downloaded the patch, installed it, and then rebooted. Each such machine would leave the Skype network when it shut down, then rejoin after booting. So the effect of Microsoft’s patch release was to increase the turnover in Skype’s network.

The result, Skype says, is that the network became unstable as the respond-quickly mechanism outran the maintain-stability mechanism; and the problem snowballed as the growing instability caused ever stronger (but poorly aimed) responses. The Skype service was essentially unavailable for a day or two starting on Thursday August 16, until the company could track down the problem and fix a code bug that it said contributed to the problem.

The biggest remaining mystery is why the problem took so long to develop. Microsoft issued the patch on Tuesday, and Skype didn’t get into deep trouble until Thursday. We can explain away some of the delay by noting that Windows machines might take up to a day to download the patch and reboot, but this still means it took Skype’s network at least a day to melt down. I’d love to know more about how this happened.

I would hesitate to draw too many broad conclusions from a single failure like this. Large systems of all kinds, whether centralized or P2P, must fight difficult stability problems. When a problem like this does occur, it’s a useful natural experiment in how large systems behave. I only hope Skype has more to say about what went wrong.

Inside Clouseau's Brain: Dissecting SafeMedia's Outlandish Technical Claims

I wrote in April about the over-the-top marketing claims of the “anti-piracy” company SafeMedia. (See Is SafeMedia a Parody?) The company’s marketing materials claim that its comically named product, “Clouseau,” can do what is provably impossible. Having both a professional and personal interest in how such claims come to be made, I wanted to learn more about how Clouseau actually worked. But the company, unsurprisingly, did not provide that information.

Now we have two more clues. First, SafeMedia founder Safwat Fahmy was actually invited to testify before a congressional hearing, where he provided written testimony. Second, I got hold of a white paper that SafeMedia salespeople are giving to prospective customers. Both documents give some technical information about Clouseau.

[CORRECTION (June 26): Mr. Fahmy was not actually invited to testify, and he did not appear before the committee, according to the committee’s own web site about the hearing. All he did was submit written testimony, which absolutely anyone is allowed to do. I was misled by a SafeMedia press release. I should have known better than to rely on those guys.]

The documents contradict each other in several ways. For example, Mr. Fahmy’s testimony says that Clouseau “detects and prohibits illegal P2P traffic while allowing the passage of legal P2P such as BitTorrent” (page 5). But the white paper says that BitTorrent is illegal and was blocked every time by Clouseau in their tests (page 6 and Appendix A).

Similarly, the white paper says, “In a series of tests conducted by us, Clouseau did not block any normal packets including web HTTP(S) and VPN (ipSec and PPTP).” (page 5) (HTTPS and VPN protocols are standard ways of using encryption to hide the content of communications.) But Mr. Fahmy’s congressional testimony says that “Clouseau is fully effective at forensically discriminating between legal and illegal P2P traffic with no false positives … whether encrypted or not” (page 7) which implies that it must block some HTTPS and VPN traffic.

One thing the documents seem to agree on is that Clouseau operates by trying to detect and block certain protocols, rather than looking at the material being transmitted. That is, it doesn’t look for infringing content but instead declares certain protocols to be illegitimate and then tries to block them. Which is a problematic design, because many protocols are used for both infringing and noninfringing purposes. Some protocols, like BitTorrent, see lots of noninfringing use and lots of infringing use. So Clouseau will get many cases wrong, whether it blocks BitTorrent or not – a problem the company apparently gets around by claiming to block BitTorrent and claiming not to block it.
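
Neither document spells out Clouseau's matching rules, but protocol blocking of this kind generally means matching a protocol's fixed fingerprint rather than examining the file being transferred. The sketch below, which is my illustration and not SafeMedia's code, shows how little such a match reveals about legality: every BitTorrent connection opens with the same handshake, so the test fires identically on an infringing movie and on a legitimately distributed Linux install image.

    # Every BitTorrent connection begins with the same handshake: a length
    # byte of 19 followed by the ASCII string "BitTorrent protocol". The
    # check below says nothing about what is actually being shared.
    BT_HANDSHAKE = b"\x13BitTorrent protocol"

    def looks_like_bittorrent(payload: bytes) -> bool:
        return payload.startswith(BT_HANDSHAKE)

    # Fires for a pirated movie and for a Fedora install image alike.
    assert looks_like_bittorrent(BT_HANDSHAKE + b"\x00" * 8 + b"info-hash...")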

How does the company square its protocol-blocking design with its claim to block illegal content with complete accuracy? Apparently they just redefine the term “illegal” to be co-extensive with the set of things their product blocks. In other words, the company’s legal claims seem to be just as implausible as its technical claims.

[UPDATE (Oct. 5, 2007): I hear rumors that SafeMedia is telling people that they offered me or my group access to a Clouseau device to study, but we refused. For the record, this is false.]