
Comcast Gets Slapped, But the FCC Wisely Leaves its Options Open

The FCC’s recent Comcast action—whose full text is unavailable as yet, though it was described in a press release and statements from each commissioner—is a lesson in the importance of technological literacy for policymaking. The five commissioners’ views, as reflected in their statements, are strongly correlated with the degree of understanding of the fact pattern that each commissioner’s statement reveals. Both dissenting commissioners, it turns out, materially misunderstood the technical facts on which they assert their decisions were based. But the majority, despite its technical competence, avoided a bright-line rule—and that might itself turn out to be great policy.

Referring to what she introduces as the “BitTorrent-Comcast controversy,” dissenting Commissioner Tate writes that after the FCC began to look into the matter, “the two parties announced on March 27 an agreement to collaborate in managing web traffic and to work together to address network management and content distribution.” Where private parties can agree among themselves, Commissioner Tate sensibly argues, regulators ought to stand back. But as Ed and others have pointed out before, this has never been a two-party dispute. BitTorrent, Inc., which negotiated with Comcast, doesn’t have the power to redefine the open BitTorrent protocol whose name it shares. Anyone can write client software to share files using today’s version of the BitTorrent protocol – and no agreement between Comcast and BitTorrent, Inc. could change that. Indeed, if the protocol were modified to buy overall traffic reductions by slowing downloads for individual users, one might expect many users to decline to switch. For this particular issue to be resolved among the parties, Comcast would have to negotiate with all (or at least most) of the present and future developers of BitTorrent clients. A private or mediated resolution among the primary actors involved in this dispute has not taken place and isn’t, as far as I know, currently being attempted. So while I share Ms. Tate’s wise preference for mediation and regulatory reticence, I don’t think her view in this particular case is available to anyone who fully understands the technical facts.

The other dissenting commissioner, Robert McDowell, shares Ms. Tate’s confusion about who the parties to the dispute are, chastising the majority for going forward after Comcast and BitTorrent, Inc. announced their differences settled. He’s also simply confused about the technology, writing that “the vast majority of consumers” “do not use P2P software to watch YouTube” when (a) YouTube isn’t delivered over P2P software, so its traffic numbers don’t speak to the P2P issue and (b) YouTube is one of the most popular sites on the web, making it very unlikely that the “vast majority of consumers” avoid the site. Likewise, he writes that network management allows companies to provide “online video without distortion, pops, and hisses,” analog problems that aren’t faced by digital media.

The majority decision, in finding Comcast’s activities collectively to be over the line from “reasonable network management,” leaves substantial uncertainty about where that line lies, which is another way of saying that the decision makes it hard for other ISPs to predict what kinds of network management, short of what Comcast did, would prompt sanctions in the future. For example, what if Comcast or another ISP were to use the same tools only to target BitTorrent files that appear, after deep packet inspection, to violate copyright? The commissioners were at pains to emphasize that ISPs remain free to police their networks for illegal content. But a filter designed to impede transfer of most infringing video would be certain to generate a significant number of false positives, and the false positives (that is, transfers of legal video impeded by the filter) would act as a thumb on the scales in favor of traditional cable service, raising the same body of concerns about competition that the commissioners cite as a background factor informing their decision to sanction Comcast. We don’t know how that one would turn out.
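To see why even a fairly accurate filter raises this concern, here is a hypothetical back-of-the-envelope sketch in Python. The traffic volume and error rate below are illustrative assumptions, not measurements from Comcast or anyone else; the point is only that a small false-positive rate applied to a large volume of legal traffic still impedes a large absolute number of lawful transfers.

    # Hypothetical illustration of the false-positive problem with copyright filtering.
    # Both numbers are made-up assumptions chosen only for the sake of the example.
    legal_transfers_per_day = 1_000_000   # lawful video transfers crossing the ISP's network daily
    false_positive_rate = 0.01            # filter wrongly flags 1% of those lawful transfers

    impeded_legal_transfers = legal_transfers_per_day * false_positive_rate
    print(f"Lawful transfers impeded per day: {impeded_legal_transfers:,.0f}")  # -> 10,000

Each of those impeded transfers is a lawful use of online video that the ISP’s own cable offering never suffers, which is exactly the competitive thumb on the scales described above.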

McDowell’s dissent highlights the ambiguity of the finding. He writes: “This matter would have had a better chance on appeal if we had put the horse before the cart and conducted a rulemaking, issued rules and then enforced them… The majority’s view of its ability to adjudicate this matter solely pursuant to ancillary authority is legally deficient as well. Under the analysis set forth in the order, the Commission apparently can do anything so long as it frames its actions in terms of promoting the Internet or broadband deployment.”

Should the commissioners have adopted a “bright line” rule, as McDowell’s dissent suggests? The Comcast ruling’s uncertainty guarantees a future of envelope-pushing and resource-intensive, case-by-case adjudication, whether in regulatory proceedings or the courts. But I actually think that might be the best available alternative here. It preserves the Commission’s ability to make the right decision in future cases without having to guess, today, what precise rule would dictate those future results. (On the flip side, it also preserves the Commission’s ability to make bad choices in the future, especially if diminished public interest in the issue increases the odds of regulatory capture.) If Jim Harper is correct that Martin’s support is a strategic gambit to tie the issue up while broadband service expands, this suggests that Martin believes, as I do, that uncertainty about future interventions is a good way to keep ISPs on their best behavior.

What's the Cyber in Cyber-Security?

Recently Barack Obama gave a speech on security, focusing on nuclear, biological, and infotech threats. It was a good, thoughtful speech, but I couldn’t help noticing how, in his discussion of the infotech threats, he promised to appoint a “National Cyber Advisor” to give the president advice about infotech threats. It’s now becoming standard Washington parlance to say “cyber” as a shorthand for what many of us would call “information security.” I won’t fault Obama for using the terminology spoken by the usual Washington experts. Still, it’s interesting to consider how Washington has developed its own terminology, and what that terminology reveals about the inside-the-beltway view of the information security problem.

The word “cyber” has interesting roots. It started with an old Greek word meaning (roughly) one who guides a boat, such as a pilot or rudder operator. Plato adapted this word to mean something like “governance”, on the basis that governing was like steering society. Already in ancient Greece, the term had taken on connotations of central government control.

Fast-forward to the twentieth century. Norbert Wiener foresaw the rise of sophisticated robots, and realized that a robot would need something like a brain to control its mechanisms, as your brain controls your body. Wiener predicted correctly that this kind of controller would be difficult to design and build, so he sought a word to describe the study of these “intelligent” controllers. Not finding a suitable word in English, he reached back to the old Greek word, which he transliterated into English as “cybernetics”. Notice the connection Wiener drew between governance and technological control.

Enter William Gibson. In his early novels about the electronic future, he wanted a term for the “space” where online interactions happen. Failing to find a suitable word, he coined one – cyberspace – by borrowing “cyber” from Wiener. Gibson’s 1984 novel Neuromancer popularized the term. Many of the Net’s early adopters were fans of Gibson’s work, so cyberspace became a standard name for the place you went when you were on the Net.

The odd thing about this usage is that the Internet lacks the kind of central control system that is the subject matter of cybernetics. Gibson knew this – his vision of the Net was decentralized and chaotic – but he liked the term anyway.

Gibson later recalled: “All I knew about the word ‘cyberspace’ when I coined it was that it seemed like an effective buzzword. It seemed evocative and essentially meaningless. It was suggestive of something, but had no real semantic meaning, even for me, as I saw it emerge on the page.”

Indeed, the term proved just as evocative for others as it was for Gibson, and it stuck.

As the Net grew, it was widely seen as ungovernable – which many people liked. John Perry Barlow’s “Declaration of Independence of Cyberspace” famously declared that governments have no place in cyberspace. Barlow notwithstanding, government did show up in cyberspace, but it has never come close to the kind of cybernetic control Wiener envisioned.

Meanwhile, the government’s security experts settled on a term, “information security”, or “infosec” for short, to describe the problem of securing information and digital systems. The term is widely used outside of government (along with similar terms “computer security” and “network security”) – the course I teach at Princeton on this topic is called “information security”, and many companies have Chief Information Security Officers to manage their security exposure.

So how did this term “cybersecurity” get mindshare, when we already had a useful term for the same thing? I’m not sure – give me your theories in the comments – but I wouldn’t be surprised if it reflects a military influence on government thinking. As both military and civilian organizations became wedded to digital technology, the military started preparing to defend certain national interests in an online setting. Military thinking on this topic naturally followed the modes of thought used for conventional warfare. Military units conduct reconnaissance; they maneuver over terrain; they use weapons where necessary. This mindset wants to think of security as defending some kind of terrain – and the terrain can only be cyberspace. If you’re defending cyberspace, you must be doing something called cybersecurity. Over time, “cybersecurity” somehow became “cyber security” and then just “cyber”.

Listening to Washington discussions about “cyber”, we often hear strategies designed to exert control or put government in a role of controlling, or at least steering, the evolution of technology. In this community, at least, the meaning of “cyber” has come full circle, back to Wiener’s vision of technocratic control, and Plato’s vision of government steering the ship.

Online Symposium: Voluntary Collective Licensing of Music

Today we’re kicking off an online symposium on voluntary collective licensing of music, over at the Center for InfoTech Policy site.

The symposium is motivated by recent movement in the music industry toward the possibility of licensing large music catalogs to consumers for a fixed monthly fee. For example, Warner Music, one of the major record companies, just hired Jim Griffin to explore such a system, in which Internet Service Providers would pay a per-user fee to record companies in exchange for allowing the ISPs’ customers to access music freely online. The industry had previously opposed collective licenses, making them politically non-viable, but the policy logjam may be about to break, making this a perfect time to discuss the pros and cons of various policy options.

It’s an issue that evokes strong feelings – just look at the comments on David’s recent post.

We have a strong group of panelists:

  • Matt Earp is a graduate student in the i-school at UC Berkeley, studying the design and implementation of voluntary collective licensing systems.
  • Ari Feldman is a Ph.D. candidate in computer science at Princeton, studying computer security and information policy.
  • Ed Felten is a Professor of Computer Science and Public Affairs at Princeton.
  • Jon Healey is an editorial writer at the Los Angeles Times and writes the paper’s Bit Player blog, which focuses on how technology is changing the entertainment industry’s business models.
  • Samantha Murphy is an independent singer/songwriter and Founder of SMtvMusic.com.
  • David Robinson is Associate Director of the Center for InfoTech Policy at Princeton.
  • Fred von Lohmann is a Senior Staff Attorney at the Electronic Frontier Foundation, specializing in intellectual property matters.
  • Harlan Yu is a Ph.D. candidate in computer science at Princeton, working at the intersection of computer science and public policy.

Check it out!

Music Industry Under Fire for Exploring EFF Suggestion

Jim Griffin, a music industry consultant who is in the unusual position of being recognized as smart and reasonable by participants across a broad swath of positions in the copyright debate, revealed last week that he’s working to start a new music industry organization that will urge ISPs to bundle a music licensing fee into their monthly service costs, in exchange for which the major labels will agree not to sue (and, presumably, not to threaten suit against) the ISPs’ customers for copyright infringement of the music whose rights they own. The goal, Griffin says, is to “monetize the anarchy of the Internet.”

This idea has a long history and has at various times been propounded by some on the “copyleft.” The Electronic Frontier Foundation, for example, issued in April 2004 a report entitled “A Better Way Forward: Voluntary Collective Licensing of Music File Sharing.” This report even suggested the $5 per user per month ($60 per user per year) figure that Griffin apparently has in mind.

According to the OECD, there were roughly 60 million broadband subscriptions in the United States as of the end of 2006. If each of these were to pay $60 a year, the total would be $3.6 billion a year. I know that broadband uptake is increasing, but I remain unsure how Griffin figures that the proposed system “could create a pool as large as $20 billion a year.” Perhaps this imagines global, rather than national, uptake of the plan? If so, it seems to embody some optimistic assumptions about how widely any such agreement could plausibly be extended.
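For readers who want to check the arithmetic, here is a minimal sketch in Python. The subscriber count and the $60-per-year fee are the figures cited above; the $20 billion target is the figure attributed to Griffin. Treat it as a back-of-the-envelope illustration, nothing more.

    # Back-of-the-envelope check of the licensing-pool figures discussed above.
    subscribers = 60_000_000        # OECD: ~60 million US broadband subscriptions, end of 2006
    fee_per_year = 5 * 12           # $5 per user per month, as in the EFF proposal

    us_pool = subscribers * fee_per_year
    print(f"US-only pool: ${us_pool / 1e9:.1f} billion per year")           # -> $3.6 billion

    # How many paying subscribers would a $20 billion annual pool require at $60 per year?
    target = 20_000_000_000
    print(f"Subscribers needed: {target / fee_per_year / 1e6:.0f} million") # -> ~333 million

At $60 per subscriber per year, a $20 billion pool implies roughly 333 million paying subscribers, far more than the US broadband base, which is why the figure seems to assume very broad international uptake of the plan.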

Some prominent blogs have reacted with ire—Michael Arrington at TechCrunch, for example, characterizes the move as an “extortion scheme.” Arrington argues that a licensing system will hinder innovation because the revenues from it will be constant irrespective of the amount or quality of music published by the labels, and will flow to an infrastructure that, once it begins to be subsidized, will have little structural incentive to innovate. He also argues in a later post that since the core of the system is a covenant not to sue, it represents a “protection racket.”

I think this kind of skepticism is poorly justified at this point. If the labels can turn their statutory right to sue for damages after copyright infringement into a voluntary system where they get paid and nobody gets sued, it strikes me as a case of the system working. And the numbers matter: a $20 billion payoff that tripled the industry’s current $10 billion in annual revenue would not seem reasonable, but, unless I am missing something, such a payoff also does not seem probable.

There are two core questions for the plan. First, what will it cover? The idea is that it will let the industry stop suing, and thereby end the antagonism between labels and customers. But unless a critical mass of the labels agree to the plan, users whose ISPs are paying in will still face the risk of suit from non-participating copyright holders. In fact, if the plan takes off, individual rights holders may face an incentive to defect: consumers are equally likely to infringe any popular music, whether or not it happens to be covered by the plan, since they aren’t likely to track which music is covered.

Second, how will the revenue be shared? Filesharing metrics, provided by analysts like BigChampagne, are at best approximate, and they only track downloads that occur via the public, unencrypted Internet – presumably a large share of the relevant copying, but not all of it, especially in the context of university and other networks. The squabbles will be challenging, and if past is prologue, then the labels may not prove themselves an amicable bunch in negotiating with each other.

Finally, it’s important to remember that the labels’ power depends, in the very long run, on their ability to sign the best new talent. If the licensing system proposed by Griffin takes off, it may preserve the status quo for now. But if the industry continues to give artists themselves a raw deal, as it is so often accused of doing, artists will still have the growing power that digital technology gives them to share their music without a label’s help.

Comcast and BitTorrent: Why You Can't Negotiate with a Protocol

The big tech policy news yesterday was Comcast’s announcement that it will stop impeding BitTorrent traffic, but instead will respond to network congestion by slowing traffic from the highest-volume users, regardless of what those users are doing. Comcast also announced a deal with BitTorrent, aimed at developing more effective ways of channeling peer-to-peer traffic through networks.

It may seem natural to respond to a network issue involving BitTorrent by making a deal with BitTorrent – and much of the reporting and commentary has taken that line – but there is something odd about the BitTorrent deal, which only becomes clear when we unpack the difference between the BitTorrent protocol and the BitTorrent company. The BitTorrent protocol is a set of technical rules used by desktop software programs to coordinate the peer-to-peer distribution of files. The company BitTorrent Inc. is just one maker of software that uses the protocol – indeed, it’s a relatively minor player in that market. Most people who use the BitTorrent protocol don’t use software from BitTorrent Inc.
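To make the distinction concrete: the protocol is a published specification that anyone can implement, with no permission needed from BitTorrent Inc. As one small illustration, the metadata in .torrent files uses a simple encoding called bencoding, and a toy decoder for it fits in a few dozen lines of Python. This is a sketch of one corner of the protocol for illustration only, not production code or any company’s official implementation.

    # Toy sketch: decode "bencoded" data, the encoding used for .torrent metadata.
    # The BitTorrent protocol is an open specification, so anyone can write code
    # like this without asking BitTorrent Inc. Illustration only; not robust.

    def bdecode(data: bytes, i: int = 0):
        """Decode one bencoded value starting at index i; return (value, next_index)."""
        if data[i:i+1] == b'i':                  # integer: i<digits>e
            end = data.index(b'e', i)
            return int(data[i+1:end]), end + 1
        if data[i:i+1] == b'l':                  # list: l<items>e
            i, items = i + 1, []
            while data[i:i+1] != b'e':
                item, i = bdecode(data, i)
                items.append(item)
            return items, i + 1
        if data[i:i+1] == b'd':                  # dictionary: d<key><value>...e
            i, result = i + 1, {}
            while data[i:i+1] != b'e':
                key, i = bdecode(data, i)
                value, i = bdecode(data, i)
                result[key] = value
            return result, i + 1
        colon = data.index(b':', i)              # byte string: <length>:<bytes>
        length = int(data[i:colon])
        start = colon + 1
        return data[start:start + length], start + length

    # A tiny bencoded dictionary, shaped like (but much simpler than) a .torrent file.
    value, _ = bdecode(b'd8:announce22:http://tracker.example4:name8:file.isoe')
    print(value)  # {b'announce': b'http://tracker.example', b'name': b'file.iso'}

Because every independently written client implements the same published rules, a deal between Comcast and BitTorrent Inc. cannot, by itself, change how the protocol behaves in the field; all of those other implementations would have to adopt whatever changes the two companies agree on.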

What this means is that changes in BitTorrent Inc’s products won’t have much effect on Comcast’s network. What Comcast needs, if it wants to change conditions in its network, is to change the BitTorrent protocol.

The problem is that you can’t negotiate with a protocol, for the same reason that you can’t negotiate with (say) the English language. You can use the language to negotiate with someone, but you can’t have a negotiation where the other party is the language. You can negotiate with the Queen of England, or the English Department at Princeton, or the people who publish the most popular dictionary. But the language itself just isn’t the kind of entity that can make an agreement or have an intention.

This property of protocols – that you can’t get a meeting with them, convince them to change their behavior, or make a deal with them – seems especially challenging to some Washington policymakers. If, as they do, you live in a world driven by meetings and deal-making, a world where problem-solving means convincing someone to change something, then it’s natural to think that every protocol, and every piece of technology, must be owned and managed by some entity.

Engineers sometimes make a similar mistake in thinking about technology markets. We like to think that technologies are designed by engineers, but often it’s more accurate to say that some technology was designed by a market. And where the market is in charge, there is nobody to call when the technology needs to be changed.

Will Comcast and BitTorrent Inc. succeed in improving the BitTorrent protocol? Maybe. But it won’t be enough simply to have a better protocol. They’ll also have to convince the population of BitTorrent users to switch.

UPDATE (April 2): A reader points out that BitTorrent Inc bought uTorrent, one of the popular client programs implementing the BitTorrent protocol. This means that BitTorrent Inc has more leverage to force adoption of new protocol versions than I had thought. Still, I stand by the basic point of the post, that BitTorrent Inc doesn’t have unilateral power to change the protocol.