December 14, 2010

Trying to Make Sense of the Comcast / Level 3 Dispute

[Update: I gave a brief interview to Marketplace Tech Report]

The last 48 hours have given rise to a fascinating dispute between Level 3 (a major internet backbone provider) and Comcast (a major internet service retailer). The dispute involves both technical principles and fuzzy facts, so I am writing this post more as an attempt to sort out the details in collaboration with commenters than as a definitive guide. Before we get to the facts, let’s define some terms:

Internet Backbone Provider: These are companies, like Level 3, that transport the majority of the traffic at the core of the Internet. I say the “core” because they don’t typically provide connections to the general public; they do the majority of their routing using the Border Gateway Protocol (BGP), delivering traffic from one Autonomous System (AS) to another. Each backbone provider is its own AS, but so are Internet Service Retailers. Backbone providers will often agree to “settlement-free peering” with each other, in which they deliver each other’s traffic for no fee.

Internet Service Retailers: These are companies that build the “last mile” of internet infrastructure to the general public and sell service. I’ve called them “Retailers” even though most people have traditionally called them Internet Service Providers (the ISP term can get confusing). Retailers sign up customers with the promise of connecting them to the backbone, and then sign “transit” agreements to pay the backbone providers for delivering the traffic that their customers request.
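The money flows in these two kinds of agreements can be summarized in a toy model. This is purely illustrative Python: the AS names come from this post, but the price and the shape of the code are made up for the sake of the example, not anyone’s actual rates or systems.

```python
# Toy model of inter-AS business relationships: "transit" (customer pays
# provider) versus "settlement-free peering" (no money changes hands).
# The price below is invented for illustration.
TRANSIT_PRICE_PER_MBPS = 5.00  # hypothetical dollars per Mbps per month

class AS:
    """An Autonomous System and its interconnection agreements."""
    def __init__(self, name):
        self.name = name
        self.peers = set()              # settlement-free peers
        self.transit_customers = set()  # they pay us to reach the backbone

def monthly_bill(provider, other, mbps):
    """What `other` owes `provider` for `mbps` of traffic this month."""
    if other in provider.transit_customers:
        return mbps * TRANSIT_PRICE_PER_MBPS  # transit: customer pays
    if other in provider.peers:
        return 0.0                            # settlement-free peering
    raise ValueError("no interconnection agreement")

level3 = AS("Level 3")
globalx = AS("Global Crossing")
comcast = AS("Comcast")
level3.peers.add(globalx)               # backbone-to-backbone peering
level3.transit_customers.add(comcast)   # historically, Comcast paid Level 3

print(monthly_bill(level3, comcast, 1000))  # 5000.0
print(monthly_bill(level3, globalx, 1000))  # 0.0
```

The dispute below is, in essence, about which of these two branches (or a reversed version of the first) should apply to the Level 3/Comcast link.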

Content Delivery Networks: These are companies, like Akamai, that provide an enhanced service compared to backbone providers because they specialize in physically locating content closer to the edges (such that many copies of the content are stored in parts of the network that are closer to end-users). The benefit is that the content is theoretically faster and more reliable for end-users to access because it has to traverse fewer “hops.” CDNs will often sign agreements with Retailers to interconnect at many locations close to the end-users, and even rent space to put their servers in the Retailer’s facilities (a practice called colocation).
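The core CDN idea, serving each request from whichever copy of the content is closest to the user, can be sketched in a few lines. The locations and hop counts here are invented for illustration.

```python
# Hypothetical hop counts from each replica of some content to one user.
hops_to_user = {
    "origin-datacenter": 12,   # far away, across the backbone
    "cdn-cache-nearby": 3,     # colocated near the Retailer's edge
    "cdn-cache-regional": 5,
}

def pick_replica(hops):
    # Fewer hops traversed means (theoretically) faster and more
    # reliable delivery, which is the CDN's selling point.
    return min(hops, key=hops.get)

print(pick_replica(hops_to_user))  # cdn-cache-nearby
```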

Akamai and Limelight Networks have traditionally provided delivery of Netflix content to Comcast customers as CDNs, and paid Comcast for local interconnection and colocation. Level 3, on the other hand, has a longstanding transit agreement with Comcast in which Comcast pays Level 3 to provide its customers with access to the internet backbone. Level 3 signed a deal with Netflix to become the primary provider of their content instead of the existing CDNs. Rather than change its business relationship with Comcast to something more akin to a CDN, in which it pays to locally interconnect and colocate, Level 3 hoped to continue to be paid by Comcast for providing backbone connectivity for its customers. Evidently, it thought that the current terms of its transit agreement with Comcast provided sufficient speed and reliability to satisfy Netflix. Comcast realized that it would simultaneously be losing the revenue from the existing CDNs that paid it for local services, and it would have to pay Level 3 more for backbone connectivity because more traffic would be traversing those links (apparently a whole lot). Comcast decided to try to instead charge Level 3, which didn’t sound like a good deal to Level 3. Level 3 published a press release saying Comcast was trying to unfairly leverage its exclusive control of end-users. Comcast sent a letter to the FCC saying that nothing unfair was going on and this was just a run-of-the-mill peering dispute. Level 3 replied that it was no such thing. [Updates: Comcast told the FCC that they really do originate a lot of traffic and should be considered a backbone provider. Level 3 released their own FAQ, discussing the peering issue as well as the competitive issues. AT&T blogged in support of Comcast; Level 3 said that AT&T “missed the point completely.”]

Comcast’s attempt to describe the dispute as something akin to a peering dispute between backbone providers strikes me as misleading. Comcast is not a backbone provider that can deliver packets to an arbitrary location on the internet (a location that many other backbone providers might also be able to deliver to). Instead, Comcast is representing only its end-users, and it is doing so exclusively. What’s more, it has never had a settlement-free peering agreement with Level 3 (always transit, with Comcast paying). [Edit: see my clarification below in which I raise the possibility that it may have had both agreements at the same time, but relating to different traffic.] Indeed, the very nature of retail broadband service is that download quantity (or the traffic going into the Comcast AS) far exceeds upload quantity. In Comcast’s view of the world, therefore, all of their transit agreements should be reversed such that the backbone providers pay them for the privilege of reaching their users.

Why is this a problem? Won’t the market sort it out? First, the backbone market is still relatively competitive, and within that market I think that economic forces stand a reasonable chance of finding the optimal efficiency, leaving relatively little room for anti-competitive shenanigans. However, these market dynamics can fall apart when you add last-mile providers to the mix. Last-mile providers by their nature have at least a temporary monopoly on serving a given customer and often (in the case of a provider like Comcast) a local near-monopoly on high-performance broadband service altogether. Historically, the segmentation between the backbone market and the last-mile market has prevented shenanigans in the latter from seeping into the former. Two significant changes have occurred that alter this balance: 1) Comcast has grown to the point that it exerts tremendous power over a large portion of broadband retail customers, with far less competition than in the past (for example, the era of dial-up), and 2) Level 3 has sought to become the exclusive provider of certain desirable online content, but without the same network and business structure as traditional CDNs.

The market analysis becomes even more complicated in a scenario in which the last-mile provider has a vertically integrated service that competes with services delivered over the backbone provider with which it interconnects. Comcast’s basic video service clearly competes with Netflix and other internet video. In addition, Comcast’s TV Everywhere service (in partnership with HBO) competes with other computer-screen on-demand video services. Finally, the pending Comcast/NBCU merger (under review by the FCC and DoJ) implicates Hulu and a far greater degree of vertical integration with content providers. This means that in addition to its general incentives to price-squeeze backbone providers, Comcast clearly has an incentive to discriminate against other online video providers (either by altering speed or by charging more than what a competitive market would yield).

But what do you all think? You may also find it worthwhile to slog through some of the traffic on the NANOG email list, starting roughly here.

[Edit: I ran across this fascinating blog post on the issue by Global Crossing, a backbone provider similar to Level 3.]

[Edit: Take a look at this fantastic overview of the situation in a blog post from Adam Rothschild.]

Join CITP in DC this Friday for "Emerging Threats to Online Trust"

Update – you can watch the video here.

Please join CITP this Friday from 9AM to 11AM for an event entitled “Emerging Threats to Online Trust: The Role of Public Policy and Browser Certificates.” The event will focus on the trustworthiness of the technical and policy structures that govern certificate-based browser security. It will include representatives from government, browser vendors, certificate authorities, academics, and hackers. For more information see:

http://citp.princeton.edu/events/emerging-threats-to-online-trust/

Several Freedom-to-Tinker posts have explored this set of issues:

HTC Willfully Violates the GPL in T-Mobile's New G2 Android Phone

[UPDATE (Oct 14, 2010): HTC has released the source code. Evidently 90-120 days was not in fact necessary, given that they managed to do it 7 days after the phone’s official release. It is possible that the considerable pressure from the media, modders, kernel copyright holders, and other kernel hackers contributed to the apparently accelerated release.]

[UPDATE (Nov 10, 2010): The phone has been permanently rooted.]

Last week, the hottest new Android-based phone arrived on the doorstep of thousands of expectant T-Mobile customers. What didn’t arrive with the G2 was the source code that runs the heart of the device — a customized Linux kernel. Android has been hailed as an open platform in the midst of other highly locked-down systems, but as it makes its way out of the Google source repository and into devices this vision has repeatedly hit speedbumps. Last year, I blogged about one such issue, and to their credit Google sorted out a solution. This has ultimately been to everyone’s benefit, because the modified versions of the OS have routinely enabled software applications that the stock versions haven’t supported (not to mention improved reliability and speed).

When the G2 arrived, modders were eager to get to work. First, they had to overcome one of the common hurdles to getting anything installed — the “jailbreak”. Although the core operating system is open source, phone manufacturers and carriers have placed artificial restrictions on the ability to modify the basic system files. The motivations for doing so are mixed, but the effect is that hackers have to “jailbreak” or “root” the phone — essentially obtain super-user permissions. In 2009, the Copyright Office explicitly permitted such efforts when they are done for the purpose of enabling third-party programs to run on a phone.

G2 owners were excited when it appeared that an existing rooting technique worked on the G2, but were dismayed when their efforts were reversed every time the phone rebooted. T-Mobile passed the buck to HTC, the phone manufacturer:

The HTC software implementation on the G2 stores some components in read-only memory as a security measure to prevent key operating system software from becoming corrupted and rendering the device inoperable. There is a small subset of highly technical users who may want to modify and re-engineer their devices at the code level, known as “rooting,” but a side effect of HTC’s security measure is that these modifications are temporary and cannot be saved to permanent memory. As a result the original code is restored.

As it turned out, the internal memory chip included an option to make certain portions of memory read-only, which had the effect of silently discarding all changes upon reboot. However, it appears that this can be changed by sending the right series of commands to the chip. This effectively moved the rooting efforts into the complex domain of hardware hacking, with modders trying to figure out how to send these commands. Doing so involves writing some very challenging code that interacts with the open-source Linux kernel. The hackers haven’t yet succeeded (although they still could), largely because they are working in the dark. The relevant details about how the Linux kernel has been modified by HTC have not been disclosed. Reportedly, the company is replying to email queries with the following:
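The behavior described above, writes that appear to succeed but vanish on reboot, can be captured in a toy model. This is purely illustrative Python, not anything like HTC’s actual firmware; the key names and values are invented.

```python
# Toy model of a write-protected flash region: writes land in volatile
# state and appear to succeed, but are silently discarded on reboot,
# restoring the original system code.
class ProtectedFlash:
    def __init__(self, write_protected=True):
        self.committed = {"system": "stock kernel"}  # survives reboot
        self.volatile = {}                           # does not
        self.write_protected = write_protected

    def write(self, key, value):
        self.volatile[key] = value  # appears to succeed...

    def read(self, key):
        return self.volatile.get(key, self.committed.get(key))

    def reboot(self):
        if not self.write_protected:
            self.committed.update(self.volatile)  # changes stick
        self.volatile = {}  # ...but protected changes vanish here

flash = ProtectedFlash()
flash.write("system", "rooted kernel")
print(flash.read("system"))   # "rooted kernel" -- rooting seems to work
flash.reboot()
print(flash.read("system"))   # "stock kernel" -- the change was reverted
```

Finding the right commands to flip the chip into the `write_protected=False` state, so that changes survive a reboot, is exactly what the modders were attempting.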

Thank you for contacting HTC Technical Assistance Center. HTC will typically publish on developer.htc.com the Kernel open source code for recently released devices as soon as possible. HTC will normally publish this within 90 to 120 days. This time frame is within the requirements of the open source community.

Perhaps HTC (and T-Mobile, distributor of the phone) should review the actual contents of the GNU General Public License (v2), which stipulates the legal requirements for modifying and redistributing Linux. It states that you may only distribute derivative code if you “[a]ccompany it with the complete corresponding machine-readable source code.” Notably, there is no mention of a “grace period” or the like.

The importance of redistributing source code in a timely fashion goes beyond enabling phone rooting. It is the foundation of the “copyleft” regime of software licensing that has led to the flourishing of the open source software ecosystem. If every useful modification required waiting 90 to 120 days to be built upon, it would have taken eons to get to where we are today. It’s one thing for a company to choose to pursue the closed-source model and to start from scratch, but it’s another thing for it to profit from the goodwill of the open source community while imposing arbitrary and illegal restrictions on the code.

NPR Gets it Wrong on the Rutgers Tragedy: Cyberbullying is Unique

On Saturday, NPR’s Weekend All Things Considered ran a story by Elizabeth Blair called “Public Humiliation: It’s Not The Web, It’s Us” [transcript]. The story purported to examine the phenomenon of internet-mediated public humiliation in the context of last week’s tragic suicide of Tyler Clementi, a Rutgers student who was secretly filmed having a sexual encounter in his dorm room. The video was redistributed online by the classmates who created it. The story is heartbreaking to many locals who have friends or family at Rutgers, especially to those of us in the technology policy community who are again reminded that so-called “cyberbullying” can be a life-or-death policy issue.

Thus, I was disappointed that the All Things Considered piece decided to view the issue through the lens of “public humiliation,” opening with a sampling of reality TV clips and the claim that they are significantly parallel to this past week’s tragedy. This is just not the case, for reasons that are widely known to people who study online bullying. Reality TV is about participants voluntarily choosing to expose themselves in an artificial environment, and cyberbullying is about victims being attacked against their will in the real world and in ways that reverberate even longer and more deeply than traditional bullying. If Elizabeth Blair or her editors had done the most basic survey of the literature or experts, this would have been clear.

The oddest choice of interviewees was Tavia Nyong’o, a professor of performance studies at New York University. I disagree with his claim that the TV show Glee has something significant to say about the topic, but more disturbing is his statement about what we should conclude from the event:

“[My students and I] were talking about the misleading perception, because there’s been so much advances in visibility, there’s no cost to coming out anymore. There’s a kind of equal opportunity for giving offense and for public hazing and for humiliating. We should all be able to deal with this now because we’re all equally comfortable in our own skins. Tragically, what Rutgers reveals is that we’re not all equally comfortable in our own skins.”

I’m not sure if it’s as obvious to everyone else why this is absolutely backward, but I was shocked. What Rutgers reveals is, yet again, that new technologies can facilitate new and more creative ways of being cruel to each other. What Rutgers reveals is that although television may give us ways to examine the dynamics of privacy and humiliation, we have a zone of personal privacy that still matters deeply. What Rutgers tells us is that cyberbullying has introduced new dynamics into the way that young people develop their identities and deal with hateful antagonism. Nothing about Glee or reality TV tells us that we shouldn’t be horrified when someone secretly records and distributes video of our sexual encounters. I’m “comfortable in my own skin” but I would be mortified if my sexual exploits were broadcast online. Giving Nyong’o the benefit of the doubt, perhaps his quote was taken out of context, or perhaps he’s just coming from a culture at NYU that differs radically from the experience of somewhere like middle America, but I don’t see how Blair or her editors thought that this way of constructing the piece was justifiable.

The name of the All Things Considered piece was, “It’s Not The Web, It’s Us.” The reality is that it’s both. Humiliation and bullying would of course exist regardless of the technology, but new communications technologies change the balance. For instance, the Pew Internet & American Life Project has observed how digital technologies are uniquely invasive, persistent, and distributable. Pew has also pointed out (as have many other experts) that computer-mediated communications can often have the effect of disinhibition — making attackers comfortable with doing what they would otherwise never do in direct person-to-person contact. The solution may have more to do with us than the technology, but our solutions need to be informed by an understanding of how new technologies alter the dynamic.

New Search and Browsing Interface for the RECAP Archive

We have written in the past about RECAP, our project to help make federal court documents more easily accessible. We continue to upgrade the system, and we are eager for your feedback on a new set of functionality.

One of the most-requested RECAP features is a better web interface to the archive. Today we’re releasing an experimental system for searching and browsing, at archive.recapthelaw.org. There are also a couple of extra features that we’re eager to get feedback on. For example, you can subscribe to an RSS feed for any case in order to get updates when new documents are added to the archive. We’ve also included some basic tagging features that let anybody add tags to any case. We’re sure that there will be bugs to be fixed or improvements that can be made. Please let us know.
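A per-case RSS feed like this can be consumed with nothing but the Python standard library. The XML below is a made-up sample for illustration; the archive’s real feed schema and document titles may differ.

```python
import xml.etree.ElementTree as ET

# A hypothetical per-case RECAP feed (structure invented for this sketch).
sample_feed = """<rss version="2.0"><channel>
<title>RECAP: Example v. Example</title>
<item>
<title>Document 42: Order granting motion</title>
<link>http://archive.recapthelaw.org/example/42</link>
</item>
</channel></rss>"""

root = ET.fromstring(sample_feed)
new_documents = [
    (item.findtext("title"), item.findtext("link"))
    for item in root.iter("item")
]
print(new_documents)
```

A feed reader pointed at a case’s feed URL would do essentially this on a schedule, alerting you whenever a new item appears.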

The first version of the system was built by an enterprising team of students in Professor Ed Felten’s “Civic Technologies” course: Jen King, Brett Lullo, Sajid Mehmood, and Daniel Mattos Roberts. Dhruv Kapadia has done many of the subsequent updates. The links from the RECAP Archive pages point to files on our gracious host, the Internet Archive.

See, for example, the RECAP Archive page for United States of America v. Arizona, State of, et al. This is the Arizona District Court case in which the judge last week issued an order granting an injunction against several portions of the controversial immigration law. As you can see, some of the documents have a “Download” link that allows you to directly download the document from the Internet Archive, whereas others have a “Buy from PACER” link because no RECAP users have yet liberated the document.