November 23, 2024

How Not to Fix Soccer

With the World Cup comes the quadrennial ritual in which Americans try to redesign and improve the rules of soccer. As usual, it’s a bad idea to redesign something you don’t understand—and indeed, most of the proposed changes would be harmful. What has surprised me, though, is how rarely anyone explains the rationale behind soccer’s rules. Once you understand the rationale, the rules will make a lot more sense.

So here’s the logic underlying soccer’s rules: the game is supposed to scale down, so that an ordinary youth or recreation-league game can be played under the exact same rules used by the pros. This means that the rules must be designed so that the game can be run by a single referee, without any special equipment such as a scoreboard.

Most of the popular American team sports don’t scale down in this way. American football, basketball, and hockey — the most common inspirations for “reformed” soccer rules — all require multiple referees and special equipment. To scale these sports down, you have to change the rules. For example, playground basketball has no shot clock, no counting of fouls, and nonstandard rules for awarding free throws and handling restarts—it’s fun but it’s not the same game the Lakers play. Baseball is the one popular American spectator sport that does scale down.

The scaling principle accounts for soccer’s seemingly odd timekeeping. The clock isn’t stopped and started, because we can’t assume a separate timekeeping official and we don’t want to burden the referee’s attention with a lot of clock management. The time is not displayed to the players, because we can’t assume the availability of a scoreboard. And because the players don’t know the exact remaining time, the referee gives the players some leeway to finish an attack even if the nominal finishing time has been reached. Most of the scalable sports lack a clock — think of baseball and volleyball — but soccer manages to reconcile a clock with scalability. Americans often want to “fix” this by switching to a scheme that requires a scoreboard and timekeeper.

The scaling principle also explains the system of yellow and red cards. A hockey-style penalty box system requires special timing and (realistically) a special referee to manage the penalty box and timer. Basketball-style foul handling allows penalties to mount up as more fouls are committed by the same player or team, which is good, but it requires elaborate bookkeeping to keep track of fouls committed by each player and team fouls per half. We don’t want to make the soccer referee keep such detailed records, so we simply ask him to record yellow and red cards, which are rare. He uses his judgment to decide when repeated fouls merit a yellow card. This may seem arbitrary in particular cases but it does seem fair on average. (There’s a longer essay that could be written applying the theory of efficient liability regimes to the design of sports penalties.)

It’s no accident, I think, that scalable sports such as soccer and baseball/softball are played by many Americans who typically watch non-scalable sports. There’s something satisfying about playing the same game that the pros play. So, my fellow Americans, if you’re going to fix soccer, please keep the game simple enough that the rest of us can still play it.

NJ Voting Machines Left Unattended, Despite Court Opinion

It’s Election Day in New Jersey. Longtime readers know that in advance of elections I visit polling places in Princeton, looking for voting machines left unattended, where they are vulnerable to tampering. In the past I have always found unattended machines in multiple polling places.

I hoped this time would be different, given that Judge Feinberg, in her ruling on the New Jersey voting machine case, urged the state not to leave voting machines unattended in public.

Despite the judge’s ruling, I found voting machines unattended in three of the four Princeton polling places I visited on Sunday and Monday. Here are my photos from those three locations.

This morning I cast my ballot on one of these machines.

Privacy Theater

I have a piece in today’s NY Times “Room for Debate” feature, on whether the government should regulate Facebook. In writing the piece, I was looking for a pithy way to express the problems with today’s notice-and-consent model for online privacy. After some thought, I settled on “privacy theater”.

Bruce Schneier has popularized the term “security theater,” denoting security measures that look impressive but don’t actually protect us—they create the appearance of security but not the reality. When a security guard asks to see your ID but doesn’t do more than glance at it, that’s security theater. Much of what happens at airport checkpoints is security theater too.

Privacy theater is the same concept, applied to privacy. Facebook’s privacy policy runs to almost 6,000 words of dense legalese. We are all supposed to have read it and agreed to its terms. But that’s just theater. Hardly any of us actually read privacy policies, and even fewer weigh their provisions carefully. As I wrote in the Times piece, we pretend to have read sites’ privacy policies, and the sites pretend that we have understood and consented to all of their terms. It’s privacy theater.

Worse yet, privacy policies are subject to change. When sites change their policies, we get another round of privacy theater, in which sites pretend to notify us of the changes, and we pretend to consider them before continuing to use the site.

And yet, if we’re going to replace the notice-and-consent model, we need something else to put in its place. At this point, it’s hard to see what that might be. It might help to set up default rules, on the theory that a policy that states only how it differs from the default could be shorter and simpler than a stand-alone policy, but that approach will only go so far.

In the end, we may be stuck with privacy theater, just as we’re often stuck with security theater. If we can’t provide the reality of privacy or security, we can settle for theater, which at least makes us feel a bit better about our vulnerability.

Developing Texts Like We Develop Software

Recently I was asked to speak at a conference for university librarians, about how the future of academic publication looks to me as a computer scientist. It’s an interesting question. What do computer scientists have to teach humanists about how to write? Surely not our elegant prose style.

There is something distinctive about how computer scientists write: we tend to use software development tools to “develop” our texts. This seems natural to us. A software program, after all, is just a big text, and the software developers are the authors of the text. If a tool is good for developing the large, complex, finicky text that is a program, why not use it for more traditional texts as well?

Like software developers, computer scientist writers tend to use version control systems. These are software tools that track and manage different versions of a text. What makes them valuable is not just the ability to “roll back” to old versions — you can get that (albeit awkwardly) by keeping multiple copies of a file. The big win with version control tools is the level of control they give you. Who wrote this line? What did Joe write last Tuesday? Notify me every time section 4 changes. Undo the changes Fred made last Wednesday, but leave all subsequent changes in place. And so on. Version control systems are a much more powerful relative of the “track changes” and “review” features of standard word processors.
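To make those queries concrete: in git, one widely used version control system (my choice of example; the discussion above doesn’t single out any particular tool), each maps onto a one-line command. The file name and commit id here are hypothetical:

    # Who wrote each line of this file, and in which revision?
    git blame chapter4.txt

    # What did Joe write last Tuesday?
    git log --author=joe --since="last Tuesday" -p

    # Undo the changes made in one earlier commit, leaving all
    # subsequent changes in place
    git revert abc1234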

Another big advantage of advanced version control is that it enables parallel development, a style of operation in which multiple people can work on the text, separately, at the same time. Of course, it’s easy to work in parallel. What’s hard is to merge the parallel changes into a coherent final product — which is a huge pain in the neck with traditional editing tools, but is easy and natural with a good version control system. Parallel development lets you turn out a high-quality product faster — it’s a necessity when you have hundreds or thousands of programmers working on the same product — and it vastly reduces the amount of human effort spent on coordination. You still need coordination, of course, but you can focus it where it matters, on the conceptual clarity of the document, without getting distracted by version-wrangling.
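In git terms (again my example, not the only possible workflow), parallel development is just branching and merging. A sketch, with hypothetical branch names, assuming the main line of the text lives on a branch called main:

    # Each coauthor works independently on a branch
    git checkout -b chapter2-rewrite
    # ... edit the text, then record the changes ...
    git commit -a -m "Tighten the argument in chapter 2"

    # Later, fold the parallel work back into the main line;
    # git merges mechanically and flags any overlapping edits
    git checkout main
    git merge chapter2-rewrite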

Interestingly, version control and parallel development turn out to be useful even for single-author works. Version control lets you undo your mistakes and reconstruct the history of a problematic section. Parallel development is useful if you want to try an experiment — what happens if I swap sections 3 and 4? — pursuing the new approach for a while yet retaining the ability to accept or reject the experiment as a whole. These tools are so useful that experienced computer scientists tend to use them to write almost anything longer than a blog post.
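The swap-the-sections experiment maps naturally onto a short-lived branch. Another git sketch, with hypothetical names:

    # Try the experiment on its own branch
    git checkout -b swap-sections-3-4
    # ... restructure the text, committing as you go ...

    # Accept the experiment: merge it back in one step
    git checkout main
    git merge swap-sections-3-4

    # Or reject it wholesale: delete the branch, and the main
    # line of the text is untouched
    git branch -D swap-sections-3-4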

While version control and parallel development have become standard in computer science writing, other software development practices are only starting to cross over into CS writing: issue tracking and the “release early and often” strategy.

Issue tracking systems are used to keep track of problems, bugs, and other issues that need to be addressed in a text. As with version control, you can do this manually, or rely on a simple to-do list, but specialized tools are more powerful and give you better control and better visibility into the past. As with software, issues can range from small problems (our terminology for X is confusing) to larger challenges (it would be nice if our dataset were bigger).
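As one concrete incarnation (my example; any tracker would serve), here is how those two issues might be filed from the command line with GitHub’s gh tool, against a hypothetical manuscript repository:

    # A small issue: confusing terminology
    gh issue create --title "Terminology for X is confusing" \
        --body "Sections 2 and 5 use different names for the same concept."

    # A larger one: the dataset should grow
    gh issue create --title "It would be nice if our dataset were bigger" \
        --body "Readers are likely to ask for more data before we release."

    # What still needs attention?
    gh issue list --state open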

“Release early and often” is a strategy for rapidly improving a text by making it available to users (or readers), getting feedback, and rapidly turning out a new version that addresses the feedback. Users’ critiques become issues in the issue tracking system; authors modify the text to address the most urgent issues; and a new version is released as soon as the text stabilizes. The result is rapid improvement, aligned with the true desires of users. This approach requires the right attitude from users, who need to be willing to tolerate problems, in exchange for a promise that their critiques will be addressed promptly.
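In practice, “releasing” a version of a text can be as lightweight as tagging the repository and pushing the tag where readers can fetch it. A git sketch, with a hypothetical version number:

    # Mark the stabilized text as a new release
    git tag -a v0.3 -m "Third draft: addresses the most urgent reader issues"

    # Publish the tag so readers can fetch exactly this version
    git push origin v0.3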

What does all of this mean for writers who are not computer scientists? I won’t be so bold as to say that the future of writing will be exactly like software development. But I do think that the tools and techniques of software development, which are already widely used by computer scientist writers, will diffuse into more common usage. It will be hard to retrofit them into today’s large, well-established editing software, but as writing tools move into the cloud, I wouldn’t be surprised to see them take on more of the attributes of today’s software development tools.

One consequence of using these tools is that you end up with a fairly complete record of how the text developed over time, and why. Imagine having a record like that for the great works of the past. We could know what the writer did every hour, every day while writing. We could know which issues and problems the author perceived in earlier versions of the text, and how these were addressed. We could know which issues the author saw as still unfixed in the final published text. We will have this kind of visibility into our future writing — assuming we produce works that are worthy of study.

India’s Electronic Voting Machines Have Security Problems

A team led by Hari Prasad, Alex Halderman, and Rop Gonggrijp released today a technical paper detailing serious security problems with the electronic voting machines (EVMs) used in India.

The independent Electoral Commission of India, which is generally well respected, has dealt poorly with previous questions about EVM security. The chair of the Electoral Commission has called the machines “infallible” and “perfect” and has rejected any suggestion that security improvements are even possible. I hope the new study will cause the EC to take a more realistic approach to EVM security.

The researchers got their hands on a real Indian EVM, which they were able to examine and analyze. They were unable to extract the software running in the machine (because that would have required rendering the machine unusable for elections, which they had agreed not to do), so their analysis focused on the hardware. They identified several attacks that manipulate the hardware, either by replacing components or by clamping a device onto a chip on the motherboard to modify votes. They implemented demonstration attacks, actually building proof-of-concept substitute hardware and vote-manipulation devices.

Perhaps the most interesting aspect of India’s EVMs is how simple they are. Simplicity is a virtue in security as in engineering generally, and researchers (including me) who have studied US voting machines have advocated simplifying their design. India’s EVMs show that while simplicity is good, it’s not enough. Unless there is some way to audit or verify the votes, even a simple system is subject to manipulation.

If you’re interested in the details, please read the team’s paper.

The ball is now in the Election Commission’s court. Let’s hope that they take steps to address the EVM problems, to give the citizens of the world’s largest democracy the transparent and accurate elections they deserve.