Archives for 2010

How Not to Fix Soccer

With the World Cup comes the quadrennial ritual in which Americans try to redesign and improve the rules of soccer. As usual, it’s a bad idea to redesign something you don’t understand—and indeed, most of the proposed changes would be harmful. What has surprised me, though, is how rarely anyone explains the rationale behind soccer’s rules. Once you understand the rationale, the rules will make a lot more sense.

So here’s the logic underlying soccer’s rules: the game is supposed to scale down, so that an ordinary youth or recreation-league game can be played under the exact same rules used by the pros. This means that the rules must be designed so that the game can be run by a single referee, without any special equipment such as a scoreboard.

Most of the popular American team sports don’t scale down in this way. American football, basketball, and hockey — the most common inspirations for “reformed” soccer rules — all require multiple referees and special equipment. To scale these sports down, you have to change the rules. For example, playground basketball has no shot clock, no counting of fouls, and nonstandard rules for awarding free throws and handling restarts—it’s fun but it’s not the same game the Lakers play. Baseball is the one popular American spectator sport that does scale down.

The scaling principle accounts for soccer’s seemingly odd timekeeping. The clock isn’t stopped and started, because we can’t assume a separate timekeeping official and we don’t want to burden the referee’s attention with a lot of clock management. The time is not displayed to the players, because we can’t assume the availability of a scoreboard. And because the players don’t know the exact remaining time, the referee gives the players some leeway to finish an attack even if the nominal finishing time has been reached. Most of the scalable sports lack a clock — think of baseball and volleyball — but soccer manages to reconcile a clock with scalability. Americans often want to “fix” this by switching to a scheme that requires a scoreboard and timekeeper.

The scaling principle also explains the system of yellow and red cards. A hockey-style penalty box system requires special timing and (realistically) a special referee to manage the penalty box and timer. Basketball-style foul handling allows penalties to mount up as more fouls are committed by the same player or team, which is good, but it requires elaborate bookkeeping to keep track of fouls committed by each player and team fouls per half. We don’t want to make the soccer referee keep such detailed records, so we simply ask him to record yellow and red cards, which are rare. He uses his judgment to decide when repeated fouls merit a yellow card. This may seem arbitrary in particular cases but it does seem fair on average. (There’s a longer essay that could be written applying the theory of efficient liability regimes to the design of sports penalties.)

It’s no accident, I think, that scalable sports such as soccer and baseball/softball are played by many Americans who typically watch non-scalable sports. There’s something satisfying about playing the same game that the pros play. So, my fellow Americans, if you’re going to fix soccer, please keep the game simple enough that the rest of us can still play it.

Rebooting the CS Publication Process

The job of an academic is to conduct research, and that means publishing manuscripts for the world to read. Computer science is somewhat unusual among the disciplines of science and engineering in that our primary research output goes to highly competitive conferences rather than journals. Acceptance rates at the “top” conferences are often 15% or lower, and the process of accepting those papers and rejecting the rest is famously problematic, particularly for papers on the bubble.

Consequently, a number of computer scientists have been writing about changing the way we do what we do. Some changes are fairly modest, like increasing acceptance rates by fiat or eliminating printed paper proceedings to save costs. Other changes would be more invasive and require more coordination.

If we wanted to make a concerted effort to really overhaul the process, what would we do? If we can legitimately concern ourselves with “clean slate” redesign of the Internet as an academic discipline, why not look at our own processes in the same light? I raised this during the rump session of the last HotOS Workshop and it seemed to really get the room talking. The discipline of computer science is clearly ready to have this discussion.

Over the past few months, I’ve been working on and off to flesh out how a clean-slate publishing process might work, taking advantage of our ability to build sophisticated tools to manage the process, and including a story for how we might get from here to there. I’ve written this up as a manuscript and I’d like to invite our blog readers, academic or otherwise, to read it over and offer their feedback. At some point, I’ll probably compress this down to fit the tight word limit of a CACM article, but first things first.

Have a look. Post your feedback here on Freedom to Tinker or send me an email and I’ll follow up, no doubt with a newer draft of my manuscript.

NJ Voting Machines Left Unattended, Despite Court Opinion

It’s Election Day in New Jersey. Longtime readers know that in advance of elections I visit polling places in Princeton, looking for voting machines left unattended, where they are vulnerable to tampering. In the past I have always found unattended machines in multiple polling places.

I hoped this time would be different, given that Judge Feinberg, in her ruling on the New Jersey voting machine case, urged the state not to leave voting machines unattended in public.

Despite the judge’s ruling, I found voting machines unattended in three of the four Princeton polling places I visited on Sunday and Monday. Here are my photos from three polling places.

This morning I cast my ballot on one of these machines.

Regulating and Not Regulating the Internet

There is increasingly heated rhetoric in DC over whether or not the government should begin to “regulate the internet.” Such language is neither accurate nor new. It implies that the government does not currently involve itself in governing the internet — an implication which is clearly untrue given a myriad of laws like the CFAA, ECPA, DMCA, and CALEA (not to mention existing regulation of consumer phone lines used for dialup and “special access” lines used for high-speed interconnection). It is more fundamentally inaccurate because referring simply to “the internet” blurs important distinctions, like the difference between communications transport providers and the communications that occur over those lines.

However, there is a genuine policy debate being had over the appropriate framework for regulation by the Federal Communications Commission. In light of recent events, the FCC is considering revising the way it has viewed broadband since the mid-2000s, and Congress is considering revising the FCC’s enabling statute — the Communications Act. At stake is the overall model for government regulation of certain aspects of internet communication. In order to understand the significance of this, we have to take a step back in time.

Before 2005

English common law, which predates American law, developed the concept of “common carriage”: providers of transport services to the general public were required to conduct their business on equal and fair terms for all comers. The idea was that all of society benefited when these general-purpose services, which facilitated many other types of commerce and cultural activity, were accessible to all. This principle was incorporated into American law via common-law precedent and ultimately a series of public laws culminating in the Communications Act of 1934. The structure of the Act remains today, albeit with modifications and grafts. The original Act included two regulatory regimes: Title II regulated common carriers (telegraph and telephone, at the time), whereas Title III regulated radio (and, ultimately, broadcast TV). By 1984, it became necessary to add Title VI for cable (Titles IV and V have assorted administrative provisions), and in 1996 the Act was revised to focus the FCC on regulating for competition rather than assuming that some of these markets would remain monopolies.

During this period, early access to the internet began to emerge via dial-up modems. In a series of decisions called the Computer Inquiries, the FCC decided that it would continue to regulate phone lines used to access the internet as common carriers, but it disclaimed direct authority over any “enhanced” services that those lines were used to connect to. The 1996 Telecommunications Act called these “enhanced” services “information services”, and called the underlying telephone-based “basic” transport services “telecommunications services”. Thus the FCC both did and did not “regulate the internet” in this era.

In any event, the trifurcated nature of the Communications Act put it on a collision course with technology convergence. By the early 2000s, broadband internet access via Cable had emerged. DSL was being treated as a common carrier, but how should the FCC treat Cable-based broadband? Should it classify it as a Title II common carrier, a Title VI cable service, or something else?

Brand X and Its Progeny

This question arose during a period in which a generally deregulatory spirit prevailed at the FCC and in Congress. The 1996 Telecommunications Act contained a great deal of hopeful language about the flourishing competition that it would usher in, making unnecessary decades of overbearing regulation. At the turn of the millennium, a variety of revolutionary networking platforms seemed just around the corner. The FCC decided that it should remove as much regulation from broadband as possible, and it had to choose between two basic approaches. First, it could declare that cable-based broadband service was essentially the same thing as DSL-based broadband service, and regulate it under Title II (that is, as a “telecommunications service”). This had the advantage of being consistent with decades of precedent, but the disadvantage of introducing a new regulatory regime to a portion of the services offered by cable operators, who had never before been subject to that sort of thing (except in the 9th Circuit, but that’s another story). The 1996 Act had given the FCC the authority to “forbear” from any obligations that it deemed unnecessary due to sufficient competition, so the FCC could still “deregulate” broadband to a significant extent. The other option was to reclassify cable broadband as a Title I service (that is, an “information service”). What is Title I, you ask? Well, there’s very little in Title I of the Communications Act (take a look). It mostly contains general pronouncements of the FCC’s purpose, so classifying a service under Title I is a more extreme form of deregulation. How extreme? We will return to this.

The FCC chose this more extreme approach, announcing its decision in the 2002 Cable Modem Order. This set off a prolonged series of legal actions, pitting the deregulatory-spirited FCC against those who wanted cable to be regulated under Title II so that operators could be forced to provide “open access” to competitors who would use their last-mile infrastructure (the same way that the phone company must allow alternative long-distance carriers today). This all culminated in a decision by the 9th Circuit that Title I classification was unacceptable, and a reversal of that decision by the Supreme Court in 2005. The case is commonly referred to by its shorthand, Brand X. The majority opinion essentially states that the statute is ambiguous as to whether cable broadband is a Title I “information service” or a Title II “telecommunications service”, and the Court deferred to the expert agency: the FCC. The FCC immediately followed up by reclassifying DSL-based broadband as a Title I service as well, in order to develop a “consistent regulatory framework across platforms.” At the same time, it released a Policy Statement outlining the so-called “Four Freedoms” that would nevertheless guide FCC policy on broadband. The extent to which such a statement was binding and enforceable would be the subject of the next chapter of the debate on “regulating the internet.”

Comcast v. FCC

After Brand X and the failure of advocates to gain “open access” provisions on broadband generally, much of the energy in the space shifted to a fallback position: at the very least, they argued, the FCC should enforce its Policy Statement (aka the “Four Freedoms”), which seemed to embody the spirit of some components of the non-discriminatory legacy of common carriage. This position came to be known as “net neutrality,” although the term has been subject to a diversity of definitions over the years and is also only one part of a potentially broader policy regime. In 2008, the FCC was forced to confront the issue when it was discovered that Comcast had begun interfering with the BitTorrent traffic of customers. The FCC sought to discipline Comcast under its untested Title I authority, Comcast argued that the FCC had no such authority, and the DC Circuit Court agreed with Comcast. It appears that the Title I approach to deregulation was more extreme than even the FCC thought (although ex-Chairman Powell had no problem blaming the litigation strategy of the current FCC). To be clear, the Circuit Court said that the FCC did not have authority under Title I. But what if the FCC had taken the alternate path back in 2002, deciding to classify broadband as a Title II service and “forbear” from all of the portions of the statute deemed irrelevant? Can the FCC still choose that path today?

Reclassification

Chairman Genachowski recently announced a proposed approach that would reclassify the transport portion of broadband as a Title II service, while simultaneously forbearing from the majority of the statute. This approach is motivated by the fact that the Comcast decision cast a pall over the FCC’s ability to fulfill its explicit mandate from Congress to develop a National Broadband Plan, many components of which require regulatory jurisdiction to implement. I will discuss the reclassification debate in my next post. I’ll be at a very interesting event in DC tomorrow morning on the subject, titled The FCC’s Authority Over Broadband Access. For a preview of some of what will be discussed there, I recommend the FCC General Counsel’s presentation from yesterday (starting at 30 minutes in), and Jon Nuechterlein’s comments at this year’s Silicon Flatirons conference. I am told that the event tomorrow will not be streamed live, but that the video will be posted online shortly thereafter. I’ll update this post when that happens. You can also follow tweets at #bbauth. [Update: the video and transcripts for Panel 1 and Panel 2 are now posted]

A New Communications Act?

In parallel, there has been growing attention to a revision of the Communications Act itself. The theory here is that the old structure just simply doesn’t speak sufficiently to the current telecommunications landscape. I’ll do a follow-up post on this topic as well, mapping out the poles of opinion on what such a revised Act should look like.

Bonus: If you just can’t get enough history and contemporary context on the structure of communications regulation, I did an audio interview with David Weinberger back in January 2009.

Privacy Theater

I have a piece in today’s NY Times “Room for Debate” feature, on whether the government should regulate Facebook. In writing the piece, I was looking for a pithy way to express the problems with today’s notice-and-consent model for online privacy. After some thought, I settled on “privacy theater”.

Bruce Schneier has popularized the term “security theater,” denoting security measures that look impressive but don’t actually protect us—they create the appearance of security but not the reality. When a security guard asks to see your ID but doesn’t do more than glance at it, that’s security theater. Much of what happens at airport checkpoints is security theater too.

Privacy theater is the same concept, applied to privacy. Facebook’s privacy policy runs to almost 6000 words of dense legalese. We are all supposed to have read it and agreed to accept its terms. But that’s just theater. Hardly any of us have actually read privacy policies, and even fewer have carefully considered their provisions. As I wrote in the Times piece, we pretend to have read sites’ privacy policies, and the sites pretend that we have understood and consented to all of their terms. It’s privacy theater.

Worse yet, privacy policies are subject to change. When sites change their policies, we get another round of privacy theater, in which sites pretend to notify us of the changes, and we pretend to consider them before continuing our use of the site.

And yet, if we’re going to replace the notice-and-consent model, we need something else to put in its place. At this point, it’s hard to see what that might be. It might help to set up default rules, on the theory that a policy stating only how it differs from the default might be shorter and simpler than a stand-alone policy, but that approach will only go so far.

In the end, we may be stuck with privacy theater, just as we’re often stuck with security theater. If we can’t provide the reality of privacy or security, we can settle for theater, which at least makes us feel a bit better about our vulnerability.