Archives for March 2007

Protect E-Voting — Support H.R. 811

After a long fight, we have reached the point where a major e-voting reform bill has a chance to become U.S. law. I’m referring to H.R. 811, sponsored by my Congressman, Rush Holt, and co-sponsored by many others. After reading the bill carefully, and discussing with students and colleagues the arguments of its supporters and critics, I am convinced that it is a very good bill that deserves our support.

The main provisions of the bill would require e-voting technologies to have a paper ballot that is (a) voter-verified, (b) privacy-preserving, and (c) durable. Paper ballots would be hand-recounted, and compared to the electronic count, at randomly-selected precincts after every election.

The most important decision in writing such a bill is which technologies should be categorically banned. The bill would allow (properly designed) optical scan systems, touch-screen systems with a suitable paper trail, and all-paper systems. Paperless touchscreens and lever machines would be banned.

Some activists have argued that the bill doesn’t go far enough. A few say that all use of computers in voting should be banned. I think that’s a mistake, because it sacrifices the security benefits computers can provide, if they’re used well.

Others argue that touch-screen voting machines should be banned even if they have good paper trails. I think that goes too far. Touchscreens can be a useful part of a good voting system, if they’re used in the right context and with a good paper trail. We shouldn’t let the worst of today’s insecure paperless touchscreens – machines that should never have been certified in the first place, and anyway would be banned by the Holt Bill for lacking a suitable paper ballot – sour us on the better uses of touchscreens that are possible.

One of the best parts of the bill is its random audit requirement, which selects 3% of precincts (or more in close races) at which the paper ballots will be hand counted and compared to the electronic records. This serves two useful purposes: detecting error or fraud that might have affected the election result, and providing a routine quality-control check on the vote-counting process. This part of the bill reflects a balance between the states’ freedom to run their own elections and the national interest in sound election management.
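To make the tiered audit concrete, here is a minimal sketch of how such a selection could work. The escalation thresholds and rates below are illustrative assumptions of mine, not the bill's exact schedule:

```python
import random

def select_audit_precincts(precincts, margin_pct, base_rate=0.03):
    """Pick a random sample of precincts for a hand count.

    Audits a base fraction of precincts, escalating when the
    reported margin of victory is small. The thresholds here are
    illustrative, not taken from the text of H.R. 811.
    """
    if margin_pct < 1.0:
        rate = 0.10      # very close race: audit more precincts
    elif margin_pct < 2.0:
        rate = 0.05
    else:
        rate = base_rate
    sample_size = max(1, round(len(precincts) * rate))
    return random.sample(precincts, sample_size)
```

The key property is that the sample is chosen at random after the electronic totals are reported, so an attacker who tampered with some precincts cannot know in advance which ones will be hand-counted.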

On the whole this is a good, strong bill. I support it, and I urge you to support it too.

How I Became a Policy Wonk

It’s All-Request Friday, when I blog on topics suggested by readers. David Molnar writes,

I’d be interested to hear your thoughts on how your work has come to have significant interface with public policy questions. Was this a conscious decision, did it “just happen,” or somewhere in between? Is this the kind of work you thought you’d be doing when you first set out to do research? What would you do differently, if you could do it again, and what in retrospect were the really good decisions you made?

I’ll address most of this today, leaving the last sentence for another day.

When I started out in research, I had no idea public policy would become a focus of my work. The switch wasn’t so much a conscious decision as a gradual realization that events and curiosity had led me into a new area. This kind of thing happens all the time in research: we stumble around until we reach an interesting result and then, with the benefit of hindsight, we construct a just-so story explaining why that result was natural and inevitable. If the result is really good, then the just-so story is right, in a sense – it justifies the result and it explains how we would have gotten there if only we hadn’t been so clueless at the start.

My just-so story has me figuring out three things. (1) Policy is deep and interesting. (2) Policy affects me directly. (3) Policy and computer security are deeply connected.

Working on the Microsoft case first taught me that policy is deep and interesting. The case raised obvious public policy issues that required deep legal, economic, and technical thinking, and deep connections between the three, to figure out. As a primary technical advisor to the Department of Justice, I got to talk to top-notch lawyers and economists about these issues. What were the real-world consequences of Microsoft doing X? What would be the consequences if they were no longer allowed to do Y? Theories weren’t enough because concrete decisions had to be made (not by me, of course, but I saw more of the decision-making process than most people did). These debates opened a window for me, and I saw in a new way the complex flow from computer science in the lab to computer products in the market. I saw, too, how public policy modulates this flow.

The DMCA taught me that policy affects me directly. The first time I saw a draft of the DMCA, before it was even law, I knew it would mean trouble for researchers, and I joined a coalition of researchers who tried to get a research exemption inserted. The DMCA statute we got was not as bad as some of the drafts, but it was still problematic. As fate would have it, my own research triggered the first legal battle to protect research from DMCA overreaching. That was another formative experience.

The third realization, that policy and computer security are joined at the hip, can’t be tied to any one experience but dawned on me slowly. I used to tell people at cocktail parties, after I had said I work on computer security and they had asked what in the world that meant, that computer security is “the study of who can do what to whom online.” This would trigger either an interesting conversation or an abrupt change of topic. What I didn’t know until somebody pointed it out was that Lenin had postulated “who can do what to whom” (and the shorthand “who-whom”) as the key question to ask in politics. And Lenin, though a terrible role model, did know a thing or two about political power struggles.

More to the point, it seems that almost every computer security problem I work on has a policy angle, and almost every policy problem I work on has a computer security angle. Policy and security try, by different means, to control what people can do, to protect people from harmful acts and actors, and to ensure freedom of action where it is desired. Working on security makes my policy work better, and vice versa. Many of the computer scientists who are most involved in policy debates come from the security community. This is not an accident but reflects the deep connections between the two fields.

(Have another topic to suggest for All-Request Friday? Suggest it in the comments here.)

How Computers Can Make Voting More Secure

By now there is overwhelming evidence that today’s paperless computer-based voting technologies have such serious security and reliability problems that we should not be using them. Computers can’t do the job by themselves. But what role should they play in voting?

It’s tempting to eliminate computers entirely, returning to old-fashioned paper voting, but I think this is a mistake. Paper has an important role, as I’ll describe below, but paper systems are subject to well-known problems such as ballot-box stuffing and chain voting, as well as other user-interface and logistical challenges.

Security does require some role for paper. Each vote must be recorded in a manner that is directly verified by the voter. And the system must be software-independent, meaning that its accuracy cannot rely on the correct functioning of any software system. Today’s paperless e-voting systems satisfy neither requirement, and the only practical way to meet the requirements is to use paper.

The proper role for computers, then, is to backstop the paper system, to improve it. What we want is not a computerized voting system, but a computer-augmented one.

This mindset changes how we think about the role of computers. Instead of trying to make computers do everything, we will look instead for weaknesses and gaps in the paper system, and ask how computers can plug them.

There are two main ways computers can help. The first is in helping voters cast their votes. Computers can check for errors in ballots, for example by detecting an invalid ballot while the voter is still in a position to fix it. Computers can present the ballot in audio format for the blind or illiterate, or in multiple languages. (Of course, badly designed computer interfaces can do harm, so we have to be careful.) There must be a voter-verified paper record at the end of the vote-casting process, but computers, used correctly, can help voters create and validate that record, by acting as ballot-marking devices or as scanners to help voters spot mismarked ballots.
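The error-checking role is easy to illustrate. Here is a minimal sketch of the kind of check a ballot-marking device or precinct scanner could run while the voter can still fix the problem (the data layout is my own invention, not any real system's):

```python
def check_ballot(selections, contests):
    """Flag overvotes and undervotes so the voter can correct them.

    selections: dict mapping contest name -> list of chosen candidates
    contests:   dict mapping contest name -> maximum allowed choices
    """
    problems = []
    for contest, max_choices in contests.items():
        chosen = selections.get(contest, [])
        if len(chosen) > max_choices:
            problems.append((contest, "overvote"))   # too many marks: vote would be spoiled
        elif len(chosen) == 0:
            problems.append((contest, "undervote"))  # no mark: warn, but still legal
    return problems
```

A precinct optical scanner can run exactly this kind of check and return the ballot to the voter instead of silently discarding the spoiled contest, which is the main accuracy advantage over a central-count paper system.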

The second way computers can help is by improving security. Usually the e-voting security debate is about how to keep computers from making security too much worse than it was before. Given the design of today’s e-voting systems, this is appropriate – just bringing these systems up to the level of security and reliability in (say) the Xbox and Wii game consoles would be nice. Even in a computer-augmented system, we’ll need to do a better job of vetting the computers’ design – if a job is worth doing with a computer, it’s worth doing correctly.

But once we adopt the mindset of augmenting a paper-based system, security looks less like a problem and more like an opportunity. We can look for the security weaknesses of paper-based systems, and ask how computers can help to address them. For example, paper-based systems are subject to ballot-box stuffing – how can computers reduce this risk?

Surprisingly, the designs of current e-voting technologies, even the ones with paper trails, don’t do all they can to compensate for the weaknesses of paper. For example, the current systems I’ve seen keep electronic records that are subject to straightforward post-election tampering. Researchers have studied approaches to this problem, but as far as I know none are used in practice.
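One well-known technique for making electronic records tamper-evident is a hash chain, in which each record commits to everything recorded before it. This is a minimal sketch of the idea, my own illustration rather than a design taken from any deployed voting system:

```python
import hashlib

def append_record(log, record):
    """Append a record to a hash-chained log.

    Each entry's digest covers the previous digest, so editing an
    earlier entry after the fact breaks the chain and is detectable.
    """
    prev = log[-1][1] if log else "0" * 64
    digest = hashlib.sha256((prev + record).encode()).hexdigest()
    log.append((record, digest))

def verify_log(log):
    """Recompute the chain and report whether it is intact."""
    prev = "0" * 64
    for record, digest in log:
        if hashlib.sha256((prev + record).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

A chain like this doesn't prevent tampering by itself, but if the final digest is published or copied to independent parties at the close of polls, any later rewriting of the electronic records becomes evident, which is exactly the kind of compensation for paper's weaknesses that current systems leave on the table.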

In future posts, we’ll discuss design ideas for computer-augmented voting.