Archives for 2006

ThreeBallot and Write-Ins

Yesterday I wrote about Ron Rivest’s ThreeBallot voting system. Today I want to start a discussion of problems with the system. (To reiterate: the purpose of this kind of criticism is not to dump on the designer but to advance our collective understanding of voting system design.) Charlie Strauss and Andrew Appel have more thorough criticisms, which I’ll get to in future posts. Today I want to explain what I think is the simplest problem with ThreeBallot: it has no natural way to handle write-in votes.

(For background on how ThreeBallot works, see the previous post.)

The basic principle of ThreeBallot voting is that each voter fills out three ballots. Every candidate’s name must be marked on either one or two of the three ballots – to vote for a candidate you mark that candidate on exactly two of the three ballots; all other candidates get marked on exactly one of the three ballots. The correctness of ThreeBallot depends on what I’ll call the Constraint: each voter creates at least one mark, and no more than two marks, for each candidate.
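The counting rule falls straight out of the Constraint: with N voters, every candidate collects between N and 2N marks, and a candidate's true vote count is simply total marks minus N. Here is a minimal Python sketch of that arithmetic; the candidate names and helper functions are my own illustration, not part of Rivest's paper.

```python
import random

CANDIDATES = ["Washington", "Arnold"]

def make_multiballot(choice):
    """Return three ballots for a voter whose real vote is `choice`.
    The chosen candidate is marked on exactly two of the three ballots;
    every other candidate is marked on exactly one (the Constraint)."""
    ballots = [set(), set(), set()]
    for cand in CANDIDATES:
        k = 2 if cand == choice else 1
        for i in random.sample(range(3), k):  # k distinct ballots
            ballots[i].add(cand)
    return ballots

def tally(ballot_box, num_voters):
    """Under the Constraint, a candidate with V supporters gets
    2*V + (num_voters - V) = num_voters + V marks in total,
    so V = marks - num_voters."""
    marks = {c: 0 for c in CANDIDATES}
    for ballot in ballot_box:
        for cand in ballot:
            marks[cand] += 1
    return {c: marks[c] - num_voters for c in CANDIDATES}

votes = ["Washington"] * 6 + ["Arnold"] * 4
box = [b for v in votes for b in make_multiballot(v)]
random.shuffle(box)            # once cast, ballots are unlinkable
print(tally(box, len(votes)))  # {'Washington': 6, 'Arnold': 4}
```

The write-in problem shows up immediately in a sketch like this: the tally formula only works if every voter marks every candidate at least once, which is exactly what can't be guaranteed for a name nobody anticipated.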

But how can we maintain the Constraint for write-in candidates? The no-more-than-two part is easy, but the at-least-one part seems impossible. If some joker writes in Homer Simpson on two of his ballots, does that force me and every other voter to write in Homer on one of my three ballots? And how could I, voting in the morning, know whether somebody will write in Homer later in the day?

We could give up on the Constraint for write-in candidates. But the desirable features of ThreeBallot – the combination of auditability and secrecy – depend on the Constraint.

In particular, it’s the at-least-one part of the Constraint that allows you to take home a copy of one of your ballots as a receipt. Because you have to mark at least one ballot for every candidate, a receipt showing that you marked one ballot for a particular candidate (a) doesn’t force you to help that candidate, and (b) doesn’t prove anything about how you really voted – and that’s why it’s safe to let you take a receipt. If we throw out the at-least-one rule for write-ins, then a receipt showing a write-in is proof that you really voted for that write-in candidate. And that kind of proof opens the door to coercion and vote-buying.

Alternatively, we can declare that people who cast write-in votes don’t get to take receipts. But then the mere existence of your receipt is proof that you didn’t vote for any write-in candidates. I don’t see any way out of this problem. Do you?

There’s an interesting lesson here about election security, and security in general. Systems that work well in the normal case often get in trouble when they try to handle exceptional or unusual cases. The more complicated the system is, the more likely such problems seem to be.

In the next post I’ll talk about some other instructive problems with ThreeBallot.

ThreeBallot

ThreeBallot is a new voting method from Ron Rivest that is supposed to make elections more secure without compromising voter privacy. It got favorable reviews at first – Michael Shamos even endorsed it at a congressional hearing – but further analysis shows that it has some serious problems. The story of ThreeBallot and its difficulties is a good illustration of why voting security is hard, and also (I hope) an interesting story in its own right.

One reason secure voting is hard is that it must meet seemingly contradictory goals. On the one hand, votes must be counted as cast, meaning the vote totals reported at the end are the correct summation of the ballots actually cast by the voters. The obvious way to guarantee this is to build a careful audit trail connecting each voter to a ballot and each ballot to the final tally. But this is at odds with the secret ballot requirement, which says that there can be no way to connect a particular voter to a particular ballot. Importantly, this last requirement must hold even if the voter wants to reveal his ballot, because letting a voter prove how he voted opens the door to coercion and vote-buying.

If we were willing to abandon the secret ballot, we could help secure elections by giving each voter a receipt to take home, with a unique serial number and the list of votes cast, and publishing all of the ballots (with serial numbers) on the web after the election. A voter could then check his receipt against the online ballot-list, and complain if they didn’t match. But of course the receipt violates the secret-ballot requirement.
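To make the (privacy-violating) receipt scheme concrete, here is a toy sketch of the voter-side check; the serial numbers and function name are invented for illustration.

```python
# Each cast ballot gets a serial number and is published after the
# election; the voter's receipt is a copy of her own entry.
published = {
    1041: ("Washington",),   # serial -> votes recorded on that ballot
    1042: ("Arnold",),
}

def receipt_checks_out(receipt, published):
    """Compare a take-home receipt against the public ballot list;
    any mismatch is evidence that the ballot was altered or dropped."""
    serial, votes = receipt
    return published.get(serial) == votes

print(receipt_checks_out((1041, ("Washington",)), published))  # True
print(receipt_checks_out((1041, ("Arnold",)), published))      # False: tampering
```

The check itself is trivial; the whole difficulty, as the paragraph above notes, is that the receipt doubles as proof of how the voter voted.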

Rivest tries to work around this by having the voter fill out three ballots. To vote for a candidate, the voter marks that candidate on exactly two of the three ballots. To vote against a candidate, the voter marks that candidate on exactly one ballot. All three ballots are put in the ballot box, but the voter gets to take home a copy of one of them (he chooses which one). At the end of election day, all ballots are published, and the voter can compare the ballot-copy he kept against the published list. If anybody modifies a ballot after it is cast, there is a one-third chance that the voter will have a copy of that ballot and will therefore be able to detect the modification. (Or so the theory goes.)
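The one-third figure is easy to check with a quick simulation. This is a sketch under the simplifying assumption that the attacker alters one of a voter's three ballots uniformly at random, independently of which copy the voter chose to keep.

```python
import random

def detection_rate(trials=100_000):
    """Each voter keeps a copy of one of her three ballots, chosen
    uniformly. If an attacker later alters one of the three, the
    kept copy exposes the change only when the two coincide."""
    caught = 0
    for _ in range(trials):
        kept = random.randrange(3)     # which ballot the voter copied
        altered = random.randrange(3)  # which ballot the attacker modifies
        if kept == altered:
            caught += 1
    return caught / trials

print(round(detection_rate(), 2))  # approximately 0.33
```

Note that this is a per-ballot probability: an attacker who alters many ballots is almost certain to be caught by some voter, which is where ThreeBallot's auditability is supposed to come from.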

Consider a two-candidate race between George Washington and Benedict Arnold. Alice wants to cast her vote for Washington, so she marks Washington on two ballots and Arnold on one. The key property is that Alice can choose to take home a copy of a Washington ballot or a copy of an Arnold ballot – so the mark on the ballot she takes doesn’t reveal her vote. Arnold’s crooked minions can offer to pay her for showing an Arnold receipt afterward, or threaten to harm her if she fails to produce one, and she can satisfy them while still casting her vote for Washington.

ThreeBallot is a clever idea and a genuine contribution to the debate about voting methods. But we shouldn’t rush out and adopt it. Other researchers, including Charlie Strauss and Andrew Appel, have some pretty devastating criticisms of it. They’ll be the topic of my next post.

Dutch E-Voting System Has Problems Similar to Diebold's

A team of Dutch researchers, led by Rop Gonggrijp and Willem-Jan Hengeveld, managed to acquire and analyze a Nedap/Groenendaal e-voting machine used widely in the Netherlands and Germany. They report problems strikingly similar to the ones Ari Feldman, Alex Halderman and I found in the Diebold AccuVote-TS.

The N/G machines all seem to be opened by the same key, which is easily bought on the Internet – just like the Diebold machines.

The N/G machines can be put in maintenance mode by entering the secret password “GEHEIM” (which means “SECRET” in Dutch) – just as the Diebold machines used the secret PIN “1111” to enter supervisor mode.

The N/G machines are subject to software tampering, allowing a criminal to inject code that transfers votes undetectably from one candidate to another – just like the Diebold machines.

There are some differences. The N/G machines appear to be better designed (from a security standpoint), requiring an attacker to work harder to tamper with vote records. As an example, the electrical connection between the N/G machine and its external memory card is cleverly constructed to prevent the voting machine from undetectably modifying data on the card. This means that the strategy used by our Diebold virus, which allows votes to be recorded correctly but modifies them a few seconds later, would not work. Instead, a vote-stealing program has to block votes from being recorded, and then fill in the missing votes later, with “improvements”. This doesn’t raise the barrier to vote-stealing much, but at least the machine’s designers considered the problem and did something constructive.

The Dutch paper has an interesting section on electromagnetic emanations from voting machines. Like other computers, voting machines emit radio waves that can be picked up at a distance by somebody with the right equipment. It is well known that eavesdroppers can often exploit such emanations to “see” the display screen of a computer at a distance. Applied to voting machines, such an attack might allow a criminal to learn what the voter is seeing on the screen – and hence how he is voting – from across the room, or possibly from outside the polling place. The Dutch researchers’ work on this topic is preliminary, suggesting a likely security problem but not yet establishing it for certain. Other e-voting machines are likely to have similar problems.

What is most striking here is that different e-voting machines from different companies seem to have such similar problems. Some of the technical challenges in designing an e-voting machine are very difficult, or even infeasible, to address, so it’s not surprising to see those problems unsolved in every machine. But seeing the same simple, easily avoided weaknesses – identical widely available keys, weak passwords and PINs – pop up again and again has to teach a deeper lesson.

Immunizing the Internet

Can computer crime be beneficial? That’s the question asked by a provocative note, “Immunizing the Internet, or: How I Learned to Stop Worrying and Love the Worm,” by an anonymous author in June’s Harvard Law Review. The note argues that some network attacks, though illegal, can be beneficial in the long run by bringing attention to network vulnerabilities and motivating organizations to address problems.

I don’t buy the note’s argument, but there is a grain of truth behind it. Vendors and independent analysts often disagree about whether a vulnerability is real or could ever be exploited in practice. One thing I’ve learned over the years is that the best (and often the only) way to resolve that debate is to demonstrate an exploit. If you can do something, people will accept that it is possible.

Our recent e-voting study is a good example. Diebold can’t seriously argue that malicious code can’t sway an election, because we have a working demo that we have shown on national TV and in front of Congress.

Even when the vendor is willing to acknowledge reality and work constructively to fix a problem, a working demonstration is useful in helping the vendor cope with the problem – and in helping the good guys within the vendor organization neutralize any internal minority that wants to deny the problem. Showing the vendor a working demo can be the first step in a constructive problem-solving relationship.

(To be clear: You can build a working demo and show it to people without revealing to the public every detail of how to build the exploit. How much information to publish about a demonstration exploit is a separate issue from whether to build it in the first place.)

But some sorts of problems can’t be demonstrated without breaking the law. For example, Diebold apparently claims that there is no way to tamper with the upcoming November election in (say) Maryland. I’m convinced that claim is false, but the most direct, obvious way to prove it false would involve actually tampering with the election, which of course is unthinkable.

The note’s reasoning would imply that the penalty for tampering with the election might be reduced, especially in cases where the tampering is engineered to be obvious and to cause minimal damage, for example if it added 10,000 write-in votes for Homer Simpson to a statewide race where a candidate was running unopposed. Though such an attack would be instructive, it would still be wrong and would deserve serious punishment. If the legal lines are drawn in the right places, and if the punishment otherwise fits the crime, then we shouldn’t let attackers off easy just because their attacks were instructive.

HP Spokesman Says Company Regrets Spying on Him

As most people know by now, Hewlett-Packard was recently caught spying on its directors and employees, and some reporters, using methods that are probably illegal and certainly unethical. Throughout the scandal, we’ve heard a lot from HP spokesman Mike Moeller. This got my attention because Mike was my next-door neighbor in Palo Alto during my sabbatical five years ago. Mike and I spent more than a few evening and weekend hours chatting over the fence.

Now it is reported that one of the targets of HP’s spying was … Mike Moeller. An HP internal email turned over to investigators says, “New monitoring system that captures AOL Instant Messaging is now up and running and deployed on Moeller’s computer”. The company also reportedly had a detective follow Mike at a trade show, and they acquired his private phone records.

I wouldn’t have figured Mike as the type to leak boardroom secrets to the press, and indeed the spies found he had done nothing improper.

What’s interesting is that he is still serving as spokesman for HP. I’m not sure what to make of this. He must have been unhappy about being targeted; who wouldn’t be? But the essence of the spokesman’s job is to stay on message – the company’s message, not your own. Resigning in anger is not the spokesmanlike thing to do, and can’t be a good career move.

Heads have rolled at HP over the spying incident – as they should have – but the investigation is far from over. Executives claim not to have known what was going on, and not to have known it might be illegal, but those claims are hard to believe. Why would the company’s lawyers have allowed this to happen without getting careful legal opinions in advance? The most plausible reason is that they didn’t want to find out whether the spying tactics were legal, just as the executives probably didn’t want to find out how the information they received had been collected.

Obviously HP is not the only organization that did this. The investigators HP hired had plenty of other customers, and they are only part of a larger industry of private spies. Obtaining others’ phone records by identity theft is common enough to have its own euphemism: “pretexting”.

After the fallout at HP, expect more revelations about spying by other organizations. People will be more alert for spying, and they’ll know that revealing it can bring down the mighty. Meanwhile, law enforcement will be prying open the records of the “investigators,” finding more examples of reputable organizations that wanted information but didn’t want to be told where it came from.

Eventually the scandal at HP will blow over, and Mike Moeller’s job will return to normal. But maybe he’ll think twice before sending that next email or instant message to his family.