March 28, 2024

I Tell the FCC to End In-Home Video Encryption

In my last post, I asked “Who Killed the Open Set-Top-Box?” There were some great comments on that post, which inspired me to write up my thoughts and send them to the FCC. The FCC has long tried and failed to mandate that cable companies make their systems more interoperable with third-party consumer devices. Nevertheless, […]

Who Killed the Open Set-Top-Box?

A few years ago, I lived in Cambridge, Massachusetts. I subscribed to Comcast cable. With my trusty Hauppauge WinTV-PVR-150 I enjoyed the ability to watch TV on my desktop computer — even to record it for later viewing or to occasionally edit and re-upload it to YouTube (with critical commentary and within the bounds of fair use of course, with nary a DMCA notice).

Then I moved across the river, to Boston. When I had Comcast cable installed, they also installed a set-top box connected to my television, for tuning all of my channels. I plugged my PVR-150 into another cable connection and got almost no channels at all, losing many of the channels I had received just across the river in Cambridge. Despite being a geek, it took me a while to figure out what had happened. As it turns out, the Boston Comcast system was technologically entirely independent of the Cambridge Comcast system, and Boston was ahead of the curve in adopting digital cable. My tuner card worked only with analog cable signals, so I set out to learn about digital tuners.

The solution, it turned out, was not as simple as purchasing a digital equivalent of my PVR-150. Although such products exist, they can tune only unencrypted digital TV signals… which are increasingly rare. The cable companies have implemented encryption to fight “theft” of channels that subscribers have not paid for. In the process, I lost the ability to view channels I had paid for on a device of my choosing. The cable set-top-box was the only device in my house that could tune to those channels.

The FCC saw this problem on the horizon all the way back in 1996, when it included requirements for compatibility of consumer-provided “navigation devices” in the 1996 Telecommunications Act (Sec. 304, adding Sec. 629 to the Communications Act, subsequently codified as 47 USC 549). The FCC ultimately settled on a solution called CableCard, which specified a universal standard for a small device that would perform the decryption and tuning functions of a full set-top-box, but in a form factor that could be inserted into third-party “host” devices. The problems with CableCard have been manifold. To begin with, the long process of implementation (both technical and political) failed to facilitate a competitive market for devices that are an alternative to the set-top-boxes provided by the cable companies. Second, the functionality of CableCard was outdated almost as soon as it was designed. For instance, it did not anticipate two-way interactive services, program guides, or video-on-demand. Third, the standard mandated particular hardware, which contributed to the implementation delays and difficulties. This technology was also cable-specific, and even there it was bound to become obsolete as more of the logic moved into software. For instance, the cable industry has increasingly shifted to new standards for delivering digital video that are not compatible even with CableCard-equipped devices. Finally, and perhaps most damning to the project, is the fact that all devices that wish to be CableCard “host” devices must first pass certification by the cableco-surrogate organization CableLabs. This means that cable companies still hold de facto control over the market.

Much of the concern over certification was related to “theft” of content, and as a result it was exceedingly difficult to become certified unless all components of your product (and the products it shared cable content with) were certified to prevent the user from taking many actions — including copying, editing, and redistribution. The result is that, in 2010, there are still very few CableCard-compatible PC devices, and all of them require running the locked-down Windows 7 operating system. What’s more, the push for control was extended even further into the home — encompassing the outputs of all CableCard-certified devices via a technology called HDCP. HDCP encrypts the digital video signal coming out of CableCard-certified devices so that only HDCP-certified devices can decrypt it. Ed pointed out in 2006 that HDCP is trivially easy to crack, so although it won’t stop a dedicated pirate (who could also just do this or this), it could nevertheless prevent a legitimate third-party video device market. Your flatscreen television must be HDCP-certified, as must your flatscreen computer monitor, as must your third-party tuner or DVR. Hauppauge has not to date shipped a CableCard- or HDCP-certified device.
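For the curious, the weakness Ed described stems from the linear structure of HDCP 1.x key agreement. Below is a toy Python sketch of that structure; the sizes and numbers are illustrative, not the real parameters (actual HDCP uses 40 keys of 56 bits each, and key-selection vectors with exactly 20 bits set), but it shows both why any two licensed devices derive the same session key and why the published attacks work.

```python
# Toy model of HDCP 1.x key agreement (illustrative parameters only).
import random

N = 8            # key-vector length (40 in real HDCP)
MOD = 2 ** 56    # keys are combined modulo 2^56

# The licensing authority's secret: a symmetric N x N matrix.
master = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        master[i][j] = master[j][i] = random.randrange(MOD)

def make_device():
    """Issue a device a public key-selection vector (KSV) and a private
    key vector derived from the master matrix:
    keys[i] = sum_j master[i][j] * ksv[j]  (mod 2^56)."""
    ksv = [random.randrange(2) for _ in range(N)]
    keys = [sum(master[i][j] * ksv[j] for j in range(N)) % MOD
            for i in range(N)]
    return ksv, keys

def shared_key(my_keys, their_ksv):
    """Add up your private keys at the positions where the other
    device's public KSV has a 1 bit."""
    return sum(k for k, bit in zip(my_keys, their_ksv) if bit) % MOD

ksv_box, keys_box = make_device()  # e.g. a set-top box
ksv_tv, keys_tv = make_device()    # e.g. a display

# Symmetry of the master matrix guarantees both sides agree:
assert shared_key(keys_box, ksv_tv) == shared_key(keys_tv, ksv_box)

# The same linearity is the flaw: each device's key vector is a linear
# function of the secret matrix, so keys extracted from enough devices
# (roughly 40) form a solvable linear system that recovers the master.
```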

The FCC has struggled with how to address these shortcomings. In 2007, it realized that the industry players were not going to update CableCard to support two-way interactions, and issued rules intended to force them to do so. This ushered in the “tru2way” standard at the heart of “CableCard 2.0.” It also ushered in yet another certification regime. This time, CableLabs imposed an even greater degree of restriction on the design of third-party devices, insisting that manufacturers surrender control of major portions of the user interface to the cable industry’s own restrictive “middleware”. This undermines one of the competitive advantages of these devices (better user interfaces), and mires their developers in the process of revamping their products to meet these arbitrary “compatibility” requirements. TiVo has not to date shipped a CableCard 2.0 certified device.

In the absence of a better digital solution, the only way third parties have been able to reclaim their ability to exercise independent in-home viewing, recording, and fair use rights has been to rely on the “analog hole” and some truly creative hacks to tune their devices. The “analog hole” refers to the fact that set-top-boxes are required to output analog “component” video in order to maintain compatibility with older devices. In order to perform navigation functions, people use an “IR blaster,” which is literally taped to the front of the set-top-box and simulates the remote control. This is not workable for most consumers, and the FCC has started to compromise on even the analog hole option by permitting “selectable output control” in some cases, which turns off analog outputs during certain programming.

The next stage of this battle began when several public interest groups filed a petition asking for several fixes to the system, including a new standards-based video gateway. Consumer electronics manufacturers supported this effort, and TiVo sent a letter explaining the ways in which CableCard, through its various instantiations, had failed to facilitate a competitive market (reversing decades of a vibrant market for VCRs and other “cable ready” devices). Recently, Google announced Google TV, a platform for integrating internet and cable content in set-top-boxes and televisions. Google did not say whether Google TV devices would aim to use CableCard or some other method to get access to cable content. Ironically, Google may be one of the few big players in this space that can deal with the overhead of the certification process, although the requirements of being certified may also work against the company’s general ethos of content openness.

In any case, the Commission released a Notice of Inquiry (see the docket here) asking several questions about a possible new approach to the CableCard issue, which would hopefully resolve some of the regime’s existing shortcomings. We should see much more activity on the docket as comments come due. At the center of the FCC’s NOI is the notion of an “AllVid” standard that defines a single set of protocols that a video provider’s equipment must support so that other devices can both control tuning and interaction and receive a video stream for display. The “all” in AllVid refers to the fact that the standard is designed to be agnostic to the delivery technology (i.e., cable, fiber, satellite, etc.). Up to this point in the proposal, I am sold. However, the NOI assumes that this functionality must come from a new type of physical device:

25. AllVid Equipment. The AllVid equipment would be designed to operate specifically with one MVPD and offered through the MVPD’s preferred mechanism, whether leased or sold at retail, manufactured by one company or competitively. We foresee two possible physical configurations for the AllVid equipment. In the first configuration, the AllVid equipment would be a small “set-back” device, capable of communicating with one navigation device or TV set and providing at least two simultaneous video streams to allow for picture-in-picture and to allow subscribers to watch a program on one channel while recording a program on another channel. In the second configuration, the AllVid equipment would act as a whole-home gateway, capable of simultaneously communicating with multiple navigation devices within the home, and providing at least six simultaneous video streams within the home (which would allow picture-in-picture in three different rooms), possibly through a modular system that could accommodate more streams as necessary.

The NOI then goes on to note that the FCC may suggest Ethernet as the universal port, IP as the base protocol, and DTCP-IP (“Digital Transmission Content Protection over IP”) for encryption (it, too, is cryptographically weak). At this point, I start to have issues with the direction things are going. First of all, it is unclear to me why there needs to be a new mandated technical standard that is rooted in a new hardware device. There are already hardware devices that intermediate between the proprietary backend network and the home — set-top-boxes and (increasingly) centralized media boxes that redistribute content to the rest of the home via simpler adapters. I don’t see the benefit of mandating something different if the functionality can simply be built into the existing devices that every consumer receives. Inventing a new hardware standard to do the same thing via a new protocol standard is unlikely to be any cheaper. It is better, it seems, to let the market innovate when it comes to devices that connect to the backend and simply mandate that any such device (including today’s set-top-boxes) provide a standard hardware interface (such as the USB, FireWire, or Ethernet ports already on such boxes) that exposes a yet-to-be-defined software interface for sending and receiving navigation information. Essentially, if you can input queries via the remote or view information via the screen, you should be able to do the same thing using something like a standard and extensible XML query-response protocol (I sketch what such an exchange might look like below, after the quoted paragraph). In this case, the FCC’s proposal could be implemented as a software update. This is indeed what seems to be hinted at in paragraph 21 of the CableCard Notice of Proposed Rulemaking that the FCC released on the same day as the AllVid NOI:

21. We also tentatively conclude that we should require cable operators to enable bi-directional communication over these interfaces. We propose that, at a minimum, these interfaces should be able to receive remote-control commands from a connected device. We also propose to require that these outputs deliver video in any industry standard format to ensure that video made available over these interfaces can be received and displayed by devices manufactured by unaffiliated manufacturers. We believe that these proposals will improve the functionality of retail consumer electronics devices significantly. We seek comment on this proposed rule and tentative conclusions. We also seek specific comment on whether cable operators could implement these changes inexpensively with firmware upgrades, and if so, whether January 1, 2011 would be a reasonable effective date for such a rule change. If not, we encourage commenters to propose an effective date for this proposed rule change based on how complex it would be to execute.
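As promised above, here is a rough sketch of the sort of XML query-response exchange I have in mind. Everything in it (element names, attributes, the shape of the messages) is hypothetical, since no such standard exists; that is precisely the gap a rule like this would need to fill.

```python
# Hypothetical XML navigation exchange between a third-party device and
# an operator's box. All element and attribute names are invented.
import xml.etree.ElementTree as ET

def build_tune_request(channel: str, request_id: int) -> bytes:
    """A third-party device asks the operator's box to tune a channel."""
    req = ET.Element("navRequest", id=str(request_id), version="1.0")
    action = ET.SubElement(req, "action", type="tune")
    ET.SubElement(action, "channel").text = channel
    return ET.tostring(req, encoding="utf-8")

def parse_guide_response(xml_bytes: bytes) -> list:
    """Pull program-guide entries out of the box's reply."""
    root = ET.fromstring(xml_bytes)
    return [
        {"channel": p.get("channel"),
         "title": p.findtext("title"),
         "start": p.findtext("start")}
        for p in root.iter("program")
    ]

print(build_tune_request("702", request_id=1).decode())

reply = b"""<navResponse id="2">
  <program channel="702">
    <title>Evening News</title><start>2010-05-01T18:00:00</start>
  </program>
</navResponse>"""
print(parse_guide_response(reply))
```

The point of the sketch is that everything here is plain structured data over an existing port; nothing about navigation requires new mandated hardware.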

Of course, this depends on “smart video devices” being able to receive the digital video signal coming from one of these devices. I’m not sure we need them to output the video itself via these new interfaces. Currently, the HDMI port typically outputs a signal that could be plugged straight into a standard DVI port on any device, but for the fact that it has been encrypted with HDCP. The problem is not that the output is non-standard, but that it is encrypted, and the keys for decryption are held by a commercial party that is understandably hostile to competition. In that sense, the Commission’s suggestion in the AllVid NOI that the new adapters use DTCP-IP represents more of the same, but worse. Less is known about this proprietary standard because the specification is closely held by the five corporations that developed it, and their restrictions on licenses for implementers are likely to be even more onerous than those of CableCard or HDCP. In any event, it is hard to imagine that under this mindset any device that allows fair-use copying, editing, and redistribution would be certified by even a more lenient administrator.

At the end of the day, the market failure for smart video devices is not due to technology. It is due to the policy decision to mandate failed content protection measures in every stage of the home entertainment chain. Given that this policy decision is unlikely to change, my only hope for ever re-crossing the river seems to rest on the eventual death of the entrenched video distribution path at the hands of the Internet. Of course, some of the same interests are present there, and this battle will likely be perpetuated for some time to come.

Trying to Make Sense of the Comcast / Level 3 Dispute

[Update: I gave a brief interview to Marketplace Tech Report]

The last 48 hours have given rise to a fascinating dispute between Level 3 (a major internet backbone provider) and Comcast (a major internet service retailer). The dispute involves both technical principles and fuzzy facts, so I am writing this post more as an attempt to sort out the details in collaboration with commenters than as a definitive guide. Before we get to the facts, let’s define some terms:

Internet Backbone Providers: These are companies, like Level 3, that transport the majority of the traffic at the core of the Internet. I say the “core” because they don’t typically provide connections to the general public; they do the majority of their routing using the Border Gateway Protocol (BGP), delivering traffic from one Autonomous System (AS) to another. Each backbone provider is its own AS, but so are Internet Service Retailers. Backbone providers will often agree to “settlement-free peering” with each other, in which they deliver each others’ traffic for no fee.

Internet Service Retailers: These are companies that build the “last mile” of internet infrastructure to the general public and sell service. I’ve called them “Retailers” even though most people have traditionally called them Internet Service Providers (the ISP term can get confusing). Retailers sign up customers with the promise of connecting them to the backbone, and then sign “transit” agreements to pay the backbone providers for delivering the traffic that their customers request.

Content Delivery Networks: These are companies like Akamai that provide an enhanced service compared to backbone providers because they specialize in physically locating content closer to the edges (such that many copies of the content are stored in a part of the network that is closer to end-users). The benefit of this is that the content is theoretically faster and more reliable for end-users to access because it has to traverse fewer “hops.” CDNs will often sign agreements with Retailers to interconnect at many locations that are close to the end-users, and even to rent space to put their servers in the Retailer’s facilities (a practice called co-location).
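To keep these three roles straight, here is a toy Python summary of the default money flows just described. The fee descriptions are stylized versions of the definitions above, not the terms of any actual contract.

```python
# A stylized map of who pays whom under the arrangements described
# above. Defaults only, not the terms of any real agreement.
payments = {
    ("retailer", "backbone"): "retailer pays transit fees for backbone access",
    ("backbone", "backbone"): "no fee (settlement-free peering)",
    ("cdn", "retailer"):      "CDN pays interconnection and co-location fees",
}

def who_pays(a: str, b: str) -> str:
    return (payments.get((a, b))
            or payments.get((b, a))
            or "no standard arrangement")

print(who_pays("retailer", "backbone"))  # e.g. Comcast's deal with Level 3
print(who_pays("cdn", "retailer"))       # e.g. Akamai's deal with Comcast
```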

Akamai and Limelight Networks have traditionally provided delivery of Netflix content to Comcast customers as CDNs, and paid Comcast for local interconnection and colocation. Level 3, on the other hand, has a longstanding transit agreement with Comcast in which Comcast pays Level 3 to provide its customers with access to the internet backbone. Level 3 signed a deal with Netflix to become the primary provider of their content instead of the existing CDNs. Rather than change its business relationship with Comcast to something more akin to a CDN’s, in which it pays to locally interconnect and colocate, Level 3 hoped to continue to be paid by Comcast for providing backbone connectivity to its customers. Evidently, it thought that the current terms of its transit agreement with Comcast provided sufficient speed and reliability to satisfy Netflix. Comcast realized that it would simultaneously lose the revenue from the existing CDNs that paid it for local services and have to pay Level 3 more for backbone connectivity, because more traffic would be traversing those links (apparently a whole lot). Comcast decided to try to instead charge Level 3, which didn’t sound like a good deal to Level 3. Level 3 published a press release saying Comcast was trying to unfairly leverage its exclusive control of end-users. Comcast sent a letter to the FCC saying that nothing unfair was going on and this was just a run-of-the-mill peering dispute. Level 3 replied that it was no such thing. [Updates: Comcast told the FCC that they really do originate a lot of traffic and should be considered a backbone provider. Level 3 released their own FAQ, discussing the peering issue as well as the competitive issues. AT&T blogged in support of Comcast; Level 3 said that AT&T “missed the point completely.”]

Comcast’s attempt to describe the dispute as something akin to a peering dispute between backbone providers strikes me as misleading. Comcast is not a backbone provider that can deliver packets to an arbitrary location on the internet (a location that many other backbone providers might also be able to deliver to). Instead, Comcast represents only its end-users, and it does so exclusively. What’s more, it has never had a settlement-free peering agreement with Level 3 (always transit, with Comcast paying). [Edit: see my clarification below, in which I raise the possibility that it may have had both agreements at the same time, but relating to different traffic.] Indeed, the very nature of retail broadband service is that download quantity (the traffic going into the Comcast AS) far exceeds upload quantity. In Comcast’s view of the world, therefore, all of its transit agreements should be reversed such that the backbone providers pay it for the privilege of reaching its users.
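One way to see why the “peering dispute” framing is a stretch: settlement-free peering policies conventionally hinge on roughly balanced traffic between the two networks, and a retail last-mile network is structurally download-heavy. Here is a toy Python version of such a ratio test; the 2:1 threshold and the traffic figures are purely illustrative, not the actual terms of either company’s policy.

```python
# Toy version of the traffic-ratio test common in peering policies.
# The 2:1 threshold and the traffic figures are illustrative only.
def settlement_free_ok(bytes_in: float, bytes_out: float,
                       max_ratio: float = 2.0) -> bool:
    """Peers often require in/out traffic within some ratio of parity."""
    heavier = max(bytes_in, bytes_out)
    lighter = max(min(bytes_in, bytes_out), 1e-9)  # avoid divide-by-zero
    return heavier / lighter <= max_ratio

# Two backbones exchanging comparable volumes: peering makes sense.
print(settlement_free_ok(bytes_in=1.0, bytes_out=1.2))  # True

# A retail "eyeball" network pulling in far more than it sends would
# fail any conventional balance test by its own framing.
print(settlement_free_ok(bytes_in=5.0, bytes_out=1.0))  # False
```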

Why is this a problem? Won’t the market sort it out? First, the backbone market is still relatively competitive, and within that market I think that economic forces stand a reasonable chance of finding an efficient outcome, leaving relatively little room for anti-competitive shenanigans. However, these market dynamics can fall apart when you add last-mile providers to the mix. Last-mile providers by their nature have at least a temporary monopoly on serving a given customer, and often (in the case of a provider like Comcast) a local near-monopoly on high-performance broadband service altogether. Historically, the segmentation between the backbone market and the last-mile market has prevented shenanigans in the latter from seeping into the former. Two significant changes have occurred that alter this balance: 1) Comcast has grown to the size that it exerts tremendous power over a large portion of broadband retail customers, with far less competition than in the past (for example, in the era of dial-up), and 2) Level 3 has sought to become the exclusive provider of certain desirable online content, but without the same network and business structure as traditional CDNs.

The market analysis becomes even more complicated in a scenario in which the last-mile provider has a vertically integrated service that competes with services being provided over the backbone provider with which it interconnects. Comcast’s basic video service clearly competes with Netflix and other internet video. In addition, Comcast’s TV Everywhere service (in partnership with HBO) competes with other computer-screen on-demand video services. Finally, the pending Comcast/NBCU merger (under review by the FCC and DoJ) implicates Hulu and a far greater degree of vertical integration with content providers. This means that in addition to its general incentives to price-squeeze backbone providers, Comcast clearly has an incentive to discriminate against other online video providers (either by altering speed or by charging more than what a competitive market would yield).

But what do you all think? You may also find it worthwhile to slog through some of the traffic on the NANOG email list, starting roughly here.

[Edit: I ran across this fascinating blog post on the issue by Global Crossing, a backbone provider similar to Level 3.]

[Edit: Take a look at this fantastic overview of the situation in a blog post from Adam Rothschild.]

Broadband Politics and Closed-Door Negotiations at the FCC

The last seven days at the FCC have been drama-filled, and that’s not something you can often say about an administrative agency. As I noted in my last post, the FCC is considering reclassifying broadband as a “common carrier” service. This would subject the access portion of the service to some additional regulations that do not currently apply but have (to some extent) been applied in the past. Last Thursday, the FCC voted 3-2 along party lines to pursue a Notice of Inquiry about this approach and others, in order to help solidify its ability to enforce consumer protections and implement the National Broadband Plan in the wake of the Comcast decision in the DC Circuit Court. There was a great deal of politicking and rhetoric around the vote. Then, on Monday, the Wall Street Journal reported that lobbyists were engaged in closed-door meetings at the FCC, discussing possible legislative compromises that would obviate the need for reclassification. This led to public outcry from everyone who was not involved in the meetings, and to allegations of misconduct by the FCC for its failure to disclose the meetings. If you sit through my description of the intricacies of reclassification, I promise to give you the juicy bits about the controversial meetings.

The Reclassification Vote and the NOI
As I explained in my previous post, the FCC faces a dilemma. The DC Circuit said it did not have the authority under Title I of the Communications Act to enforce the broadband openness principles it espoused in 2005. This cast into doubt the FCC’s ability not only to police violations of the principles but also to implement many portions of the National Broadband Plan. In the past, the Commission would have had unquestioned authority under Title II of the Act, but in a series of decisions from 2002-2007 it voluntarily “deregulated” broadband by classifying it as a Title I service. Chairman Genachowski has floated what he calls a “Third Way” approach, in which broadband would no longer be classified as a Title I service, nor subjected to every provision of Title II, but would instead be classified under Title II with extensive “forbearance” from portions of that title.

From a legal perspective, the main question is whether the FCC has the authority to reclassify the transmission component of broadband internet service as a Title II service. This gets into intricacies of how broadband service fits into statutory definitions of “information service” (aka Title I), “telecommunications”, “telecommunications service” (aka Title II), and the like. I was going to lay these out in detail, but in the interest of getting to the juicy stuff I will simply direct you to Harold Feld’s excellent post. For the “Third Way” approach to work, the FCC’s interpretation of a “telecommunications service” will have to be articulated to include broadband internet access without also swallowing a variety of internet services that everyone thinks should remain unregulated — sites like Facebook, content delivery networks like Akamai, and digital media providers like Netflix. However, this definition must not be so narrow that the FCC lacks jurisdiction to police the types of practices it is concerned about (for instance, providers should not be able to discriminate in their delivery of traffic simply by moving the discrimination from the transport layer of the network to the logical layer, or by partnering with an affiliated “ISP” that does the discriminating for them). I am largely persuaded by Harold’s arguments, but the AT&T lobbyists present the other side as well. One argument that I don’t see anyone making (yet) is that, presuming the transmission component is subject to Title II, the FCC would seem to have a much stronger argument for exercising ancillary jurisdiction over interrelated components like non-facilities-based ISPs that rely on that transmission component.

The other legal debate involves an even more arcane discussion about whether — assuming there is a “telecommunications service” offered as part of broadband service — that “telecommunications service” is something that can be regulated separately from the other “information services” (Title I) that might be offered along with it. This includes things like an email address from your provider, DNS, Usenet, and the like. Providers have historically argued that these were inseparable from the internet access component, and the so-called “Stevens Report” of 1998 introduced the notion that the “inextricably intertwined” nature of broadband service might have the result of classifying all such services as entirely Title I “information services.” To the extent that this ever made any sense, it is far from true today. What consumers believe they are purchasing is access to the internet, and all of those other services are clearly extricable from a definitional and practical standpoint (indeed, customers can and do opt for competitors for all of them on a regular basis).

But none of these legal arguments are at the fore of the current debate, which is almost entirely political. Witness, for example, John Boehner’s claim that the “Third Way” approach was a “government takeover of the Internet,” Fred Upton’s (R-MI) claim that the approach is a “blind power grab,” modest Democratic sign-on to an industry-penned and reasoning-free opposition letter, and an attempt by Republican appropriators to block funding for the FCC unless it swore off the approach. This prompted a strong response from Democratic leaders indicating that any such effort would not see the light of day. Ultimately, the FCC voted in favor of the NOI to explore the issue. Amidst this tumult, the WSJ reported that the FCC had started closed-door meetings with industry representatives in order to discuss a possible legislative compromise.

Possible Legislation and Secret Meetings
It is not against the rules to communicate with the FCC about active proceedings. Indeed, such communications are part of a healthy policymaking process that solicits input from stakeholders. The FCC typically conducts proceedings under the “permit but disclose” regime in which all discussions pertaining to the given proceeding must be described in “ex parte” filings on the docket. Ars has a good overview of the ex parte regime. The NOI passed last week is subject to these rules.

It therefore came as a surprise that a subset of industry players were secretly meeting with the FCC to discuss possible legislation that could make the NOI irrelevant. This issue is made even more egregious by the fact that the FCC just conducted a proceeding on improving ex parte disclosures, and the Chairman remarked:

“Given the complexity and importance of the issues that come before us, ex parte communications remain an essential part of our deliberative process. It is essential that industry and public stakeholders know the facts and arguments presented to us in order to express informed views.”

The Chairman’s Chief of Staff Edward Lazarus sought to explain away the obligation for ex parte disclosure, and nevertheless attached a brief disclosure letter from the meeting attendees that didn’t describe any of the details. There is perhaps a case to be made that the legislative options do not directly fall under the subject matter of the NOI, but even if this position were somehow legally justifiable it clearly falls afoul of the policy intent of the ex parte rules. Harold Feld has a great post in which he describes his nomination for “Worsht Ex Parte Ever”. The letter attached to the Lazarus post would certainly take the title if it were a formal ex parte letter. The industry participants in the meetings deserve some criticism, but ultimately the problems can only be resolved by the FCC demanding comprehensive openness rather than perpetuating a culture of loopholes.

The public outcry continues, from both public interest groups and in the comments on the Lazarus post. If it’s true that the FCC admits internally that “they f*cked up”, they should do far more to regain the public’s trust in the integrity of the notice-and-comment process.

Update: The Lazarus post was just updated to replace the link to the brief disclosure letter with two new links to letters that describe themselves as Ex Parte letters. The first contains the exact same text as the original, and the second has a few bullet points.

Regulating and Not Regulating the Internet

There is increasingly heated rhetoric in DC over whether or not the government should begin to “regulate the internet.” Such language is neither accurate nor new. This language implies that the government does not currently involve itself in governing the internet — an implication which is clearly untrue given a myriad of laws like CFAA, ECPA, DMCA, and CALEA (not to mention existing regulation of consumer phone lines used for dialup and “special access” lines used for high speed interconnection). It is more fundamentally inaccurate because referring simply to “the internet” blurs important distinctions, like the difference between communications transport providers and the communications that occur over those lines.

However, there is a genuine policy debate being had over the appropriate framework for regulation by the Federal Communications Commission. In light of recent events, the FCC is considering revising the way it has viewed broadband since the mid-2000s, and Congress is considering revising the FCC’s enabling statute — the Communications Act. At stake is the overall model for government regulation of certain aspects of internet communication. In order to understand the significance of this, we have to take a step back in time.

Before 2005

In pre-American British law, there prevailed a concept of “common carriage.” Providers of transport services to the general public were required to conduct their business on equal and fair terms for all comers. The idea was that all of society benefited when these general-purpose services, which facilitated many types of other commerce and cultural activities, were accessible to all. This principle was incorporated into American law via common-law precedent and ultimately a series of public laws culminating in the Communications Act of 1934. The structure of the Act remains today, albeit with modifications and grafts. The original Act included two regulatory regimes: Title II regulated Common Carriers (telegraph and telephone, at the time), whereas Title III regulated Radio (and, ultimately, broadcast TV). By 1984, it became necessary to add Title VI for Cable (Titles IV and V have assorted administrative provisions), and in 1996 the Act was revised to focus the FCC on regulating for competition rather than assuming that some of these markets would remain monopolies. During this period, early access to the internet began to emerge via dial-up modems. In a series of decisions called the Computer Inquiries, the FCC decided that it would continue to regulate phone lines used to access the internet as common carriers, but it disclaimed direct authority over any “enhanced” services that those lines were used to connect to. The 1996 Telecommunications Act called these “enhanced” services “information services”, and called the underlying telephone-based “basic” transport services “telecommunications services”. Thus the FCC both did and did not “regulate the internet” in this era.

In any event, the trifurcated nature of the Communications Act put it on a collision course with technology convergence. By the early 2000s, broadband internet access via Cable had emerged. DSL was being treated as a common carrier, but how should the FCC treat Cable-based broadband? Should it classify it as a Title II common carrier, a Title VI cable service, or something else?

Brand X and Its Progeny

This question arose during a period in which a generally deregulatory spirit prevailed at the FCC and in Congress. The 1996 Telecommunications Act contained a great deal of hopeful language about the flourishing competition that it would usher in, rendering decades of overbearing regulation unnecessary. At the turn of the millennium, a variety of revolutionary networking platforms seemed just around the corner. The FCC decided that it should remove as much regulation from broadband as possible, and it had to choose between two basic approaches. First, it could declare that Cable-based broadband service was essentially the same thing as DSL-based broadband service, and regulate it under Title II (aka, as a “telecommunications service”). This had the advantage of being consistent with decades of precedent, but the disadvantage of introducing a new regulatory regime to a portion of the services offered by cable operators, who had never before been subject to that sort of thing (except in the 9th Circuit, but that’s another story). The 1996 Act had given the FCC the authority to “forbear” from any obligations that it deemed unnecessary due to sufficient competition, so the FCC could still “deregulate” broadband to a significant extent. The other option was to classify cable broadband as a Title I service (aka, an “information service”). What is Title I, you ask? Well, there’s very little in Title I of the Communications Act (take a look). It mostly contains general pronouncements of the FCC’s purpose, so classifying a service as such is a more extreme way of deregulating it. How extreme? We will return to this.

The FCC chose this more extreme approach, announcing its decision in the 2002 Cable Modem Order. This set off a prolonged series of legal actions, pitting the deregulatory-spirited FCC against those who wanted cable to be regulated under Title II so that operators could be forced to provide “open access” to competitors who would use their last-mile infrastructure (the same way that the phone company must allow alternative long distance carriers today). This all culminated in a decision by the 9th Circuit that Title I classification was unacceptable, and a reversal of that decision by the Supreme Court in 2005. The case is commonly referred to by its shorthand, Brand X. The majority opinion essentially states that the statute is ambiguous as to whether cable broadband is a Title I “information service” or a Title II “telecommunications service”, and the Court deferred to the expert agency: the FCC. The FCC immediately followed up by reclassifying DSL-based broadband as a Title I service as well, in order to develop a “consistent regulatory framework across platforms.” At the same time, it released a Policy Statement outlining the so-called “Four Freedoms” that would nevertheless guide FCC policy on broadband. The extent to which such a statement was binding and enforceable would be the subject of the next chapter of the debate on “regulating the internet.”

Comcast v. FCC

After Brand X and the failure of advocates to gain “open access” provisions on broadband generally, much of the energy in the space shifted to a fallback position: at the very least, they argued, the FCC should enforce its Policy Statement (aka, the “Four Freedoms”), which seemed to embody the spirit of some components of the non-discriminatory legacy of common carriage. This position came to be known as “net neutrality,” although the term has been subject to a diversity of definitions over the years and is also only one part of a potentially broader policy regime. In 2008, the FCC was forced to confront the issue when it was discovered that Comcast had begun interfering with the BitTorrent traffic of its customers. The FCC sought to discipline Comcast under its untested Title I authority; Comcast argued that the Commission had no such authority, and the DC Circuit Court agreed with Comcast. It appears that the Title I approach to deregulation was more extreme than even the FCC thought (although ex-Chairman Powell had no problem blaming the litigation strategy of the current FCC). To be clear, the Circuit Court said that the FCC did not have authority under Title I. But what if the FCC had taken the alternate path back in 2002, deciding to classify broadband as a Title II service and “forbear” from all of the portions of the statute deemed irrelevant? Can the FCC still choose that path today?

Reclassification

Chairman Genachowski recently announced a proposed approach that would reclassify the transport portion of broadband as a Title II service, while simultaneously forbearing from the majority of the statute. This approach is motivated by the fact that the Comcast decision cast a pall over the FCC’s ability to fulfill its explicit mandate from Congress to develop a National Broadband Plan, which requires regulatory jurisdiction in order for the FCC to be able to implement many of its components. I will discuss the reclassification debate in my next post. I’ll be at a very interesting event in DC tomorrow morning on the subject, titled The FCC’s Authority Over Broadband Access. For a preview of some of what will be discussed there, I recommend the FCC General Counsel’s presentation from yesterday (starting at 30 minutes in), and Jon Nuechterlein’s comments at this year’s Silicon Flatirons conference. I am told that the event tomorrow will not be streamed live, but that the video will be posted online shortly thereafter. I’ll update this post when that happens. You can also follow tweets at #bbauth. [Update: the video and transcripts for Panel 1 and Panel 2 are now posted]

A New Communications Act?

In parallel, there has been growing attention to a revision of the Communications Act itself. The theory here is that the old structure simply doesn’t speak sufficiently to the current telecommunications landscape. I’ll do a follow-up post on this topic as well, mapping out the poles of opinion on what such a revised Act should look like.

Bonus: If you just can’t get enough history and contemporary context on the structure of communications regulation, I did an audio interview with David Weinberger back in January 2009.