November 27, 2004

Good News: Election Error Found in California

From Kim Zetter at wired.com comes the story of the recent Napa County, California election. Napa County uses paper ballots that are marked by the voter with a pen or pencil, and counted by an optical scanner machine.

Due to a miscalibrated scanner, some valid votes went uncounted, as the scanner failed to detect the markings on some ballots. The problem was discovered during a random recount of one percent of precincts. The ballots are now being recounted with properly calibrated scanners, and the recount might well affect the election’s result.

Although a mistake was made in configuring the one scanner, the good news is that the system was robust enough to catch the problem. The main source of this robustness lies in the paper record, which could be manually examined to determine whether there was a problem, and could be recounted later when a problem was found. Another important factor was the random one percent recount, which brought the problem to light.

Our biggest fear in designing election technology should not be that we’ll make a mistake, but that we’ll make a mistake and fail to notice it. Paper records and random recounts help us notice mistakes and recover from them. Paperless e-voting systems don’t.

Did I mention that the Holt e-voting bill, H.R. 2239, requires paper trails and random recounts?

[Link via Peter Neumann’s RISKS Forum.]

Solum's Response on .mobile

Larry Solum, at Legal Theory Blog, responds to my .mobile post from yesterday. He also points to a recently published paper he co-authored with Karl Mannheim. The paper looks really interesting.

Solum’s argument is essentially that creating .mobile would be an experiment, and that the experiment won’t hurt anybody. If nobody adopts .mobile, the experiment will have no effect at all. And if some people like .mobile and some don’t, those who like it will benefit and the others won’t be harmed. So why not try the experiment? (Karl-Friedrich Lenz made a similar comment.)

The Mannheim/Solum paper argues that ICANN should let a thousand gTLDs bloom, and should use auctions to allocate the new gTLDs. (gTLDs are generic top-level domains such as .com, .org, or .union.) The paper argues persuasively for this policy.

If ICANN were following the Mannheim/Solum policy, or some approximation to it, I would agree with Solum’s argument and would be happy to see the .mobile experiment proceed. (But I would still bet on its failure.) No evidence for its viability would be needed, beyond the sponsors’ willingness to outbid others for the rights to that gTLD.

But today’s ICANN policy is to authorize very few gTLDs, and to allocate them administratively. In the context of today’s policy, and knowing that the creation of one new gTLD will be used to argue against the creation of others, I think a strong case needs to be made for any new gTLD. The proponents of .mobile have not made such a case. Certainly, they have not offered a convincing argument that theirs is the best use of a new gTLD, or even that theirs is the best way to allocate the name .mobile.

Why We Don't Need .mobile

A group of companies is proposing the creation of a new Internet top level domain called “.mobile”, with rules that require sites in .mobile to be optimized for viewing on small-display devices like mobile phones.

This seems like a bad idea. A better approach is to let website authors create mobile-specific versions of their sites, but serve out those versions from ordinary .com addresses. A mobile version of weather.com, for example, would be served out from the weather.com address. The protocol used to fetch webpages, HTTP, already tells the server what kind of client is requesting the page (via the User-Agent header), so the server could easily send different versions of a page to different devices. This lets every site have a single URL, rather than having to promote separate URLs for separate purposes; and it lets any page link to any other page with a single hyperlink, rather than an awkward “click here on mobile phones, or here on other devices” construction.
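A minimal sketch of that idea, assuming a WSGI-style Python server. The user-agent substrings and page bodies here are purely illustrative, not weather.com's actual behavior:

```python
# Sketch: serve a device-appropriate page from a single URL, based on
# the HTTP User-Agent header. The substrings below are illustrative
# examples of how a server might recognize a mobile client.

MOBILE_HINTS = ("Mobile", "Nokia", "PalmOS")  # hypothetical hint list

def choose_page(user_agent):
    """Return a page body tailored to the requesting device."""
    if any(hint in user_agent for hint in MOBILE_HINTS):
        return "<html><body>Compact forecast for small screens</body></html>"
    return "<html><body>Full forecast with maps and graphics</body></html>"

def app(environ, start_response):
    """A minimal WSGI application: one URL, two renderings."""
    body = choose_page(environ.get("HTTP_USER_AGENT", ""))
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body.encode("utf-8")]
```

The point is that the choice happens on the server, invisibly to the user, so no separate .mobile address is ever needed.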

The .mobile proposal looks like a textbook example of Lessig’s point about how changing the architecture of the net can increase its regulability. .mobile would be a regulated space, in the sense that somebody would make rules controlling how sites in .mobile work. And this, I suspect, is the real purpose of .mobile – to give one group control over how mobile web technology develops. We’re better off without that control, letting the technology develop on its own over in the less regulated .com.

We already have a regulated subdomain, .kids.us, and that hasn’t worked out too well. Sites in .kids.us have to obey certain rules to keep them child-safe; but hardly any sites have joined .kids.us. Instead, child-safe sites have developed in .com and .org, and parents who want to limit what their kids see on the net just limit their kids to those sites.

If implemented, .mobile will probably suffer the same fate. Sites will choose not to pay extra for the privilege of being regulated. Instead, they’ll stay in .com and focus on improving their product.

An Inexhaustible Supply of Bugs

Eric Rescorla recently released an interesting paper analyzing data on the discovery of security bugs in popular products. I have some minor quibbles with the paper’s main argument (and I may write more about that later) but the data analysis alone makes the paper worth reading. Briefly, what Eric did was to take data about reported security vulnerabilities and fit it to a standard model of software reliability. This allowed him to estimate the number of security bugs in popular software products and the rate at which those bugs will be found in the future.

When a product version is shipped, it contains a certain number of security bugs. Over time, some of these bugs are found and fixed. One hopes that the supply of bugs is depleted over time, so that it gets harder (for both the good guys and the bad guys) to find new bugs.
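A standard reliability model of this process (my sketch of the general idea; see Rescorla's paper for the model he actually fits) assumes a fixed pool of bugs at ship time, each equally likely to be found in any period, so the expected cumulative discoveries follow an exponential saturation curve:

```python
import math

# A simple software-reliability model: a product ships with a fixed pool
# of total_bugs, and the expected number found by time t is
#   m(t) = total_bugs * (1 - exp(-rate * t)).
# The discovery rate at any moment is proportional to the bugs remaining,
# so a finite pool depletes and finds get harder over time.

def expected_found(total_bugs, rate, t):
    """Expected cumulative bug discoveries by time t."""
    return total_bugs * (1.0 - math.exp(-rate * t))

def remaining(total_bugs, rate, t):
    """Expected bugs still undiscovered at time t."""
    return total_bugs - expected_found(total_bugs, rate, t)
```

Fitting such a curve to real vulnerability reports yields estimates of both the pool size and the future discovery rate.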

The first conclusion from Eric’s analysis is that there are many, many security bugs. This confirms the expectations of many security experts. My own rule of thumb is that typical release-quality industrial code has about one serious security bug per 3,000 lines of code. A product with tens of millions of lines of code will naturally have thousands of security bugs.
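The arithmetic behind that rule of thumb is straightforward; here it is spelled out, using the figures from the text:

```python
# Back-of-the-envelope estimate from the rule of thumb above:
# roughly one serious security bug per 3,000 lines of release-quality code.

BUGS_PER_LINE = 1 / 3000  # the article's rule of thumb

def estimated_bugs(lines_of_code):
    """Estimated count of serious security bugs in a codebase."""
    return lines_of_code * BUGS_PER_LINE

# A product with 30 million lines of code would be expected to carry
# on the order of ten thousand serious security bugs.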

The second conclusion is a bit more surprising: there is little if any depletion of the bug supply. Finding and fixing bugs seems to have a small effect, or no effect at all, on the rate at which new bugs are discovered. It seems that the supply of security bugs is practically inexhaustible.

If true, this conclusion has profound implications for how we think about software security. It implies that once a version of a software product is shipped, there is nothing anybody can do to improve its security. Sure, we can (and should) apply software patches, but patching is just a treadmill and not a road to better security. No matter how many bugs we fix, the bad guys will find it just as easy to uncover new ones.
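A toy simulation (my illustration, not Rescorla's analysis) makes the contrast concrete: against a small bug pool, fixing bugs depletes the supply and later finds get scarcer; against a very large pool, the discovery rate stays essentially flat, which is what the data suggest real products look like:

```python
import random

# Toy model: each period, every remaining bug is independently found
# with probability find_prob; found bugs are fixed (removed from the pool).

def discoveries_per_period(pool_size, find_prob, periods, seed=0):
    """Return the number of bugs found in each period."""
    rng = random.Random(seed)
    remaining = pool_size
    found_each_period = []
    for _ in range(periods):
        found = sum(1 for _ in range(remaining) if rng.random() < find_prob)
        found_each_period.append(found)
        remaining -= found
    return found_each_period

# Same expected initial find rate (2.5 bugs/period) in both runs:
small = discoveries_per_period(50, 0.05, 20)
huge = discoveries_per_period(50_000, 0.00005, 20)
# In expectation, the small pool's per-period finds shrink over time,
# while the huge pool's stay roughly constant -- patching barely dents it.
```

If the real bug supply behaves like the large pool, patching individual bugs cannot meaningfully reduce the attacker's chances of finding a fresh one.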

Suit Challenges Broadcast Flag

A lawsuit was filed last week, challenging the FCC’s Broadcast Flag decree. Petitioners include the American Library Association, several other library associations, the Consumer Federation of America, Consumers Union, the EFF, and Public Knowledge. Here is a court filing outlining the petitioners’ arguments.